Dec  7 04:00:39 np0005549474 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  7 04:00:39 np0005549474 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  7 04:00:39 np0005549474 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  7 04:00:39 np0005549474 kernel: BIOS-provided physical RAM map:
Dec  7 04:00:39 np0005549474 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  7 04:00:39 np0005549474 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  7 04:00:39 np0005549474 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  7 04:00:39 np0005549474 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  7 04:00:39 np0005549474 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  7 04:00:39 np0005549474 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  7 04:00:39 np0005549474 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  7 04:00:39 np0005549474 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  7 04:00:39 np0005549474 kernel: NX (Execute Disable) protection: active
Dec  7 04:00:39 np0005549474 kernel: APIC: Static calls initialized
Dec  7 04:00:39 np0005549474 kernel: SMBIOS 2.8 present.
Dec  7 04:00:39 np0005549474 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  7 04:00:39 np0005549474 kernel: Hypervisor detected: KVM
Dec  7 04:00:39 np0005549474 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  7 04:00:39 np0005549474 kernel: kvm-clock: using sched offset of 3431680995 cycles
Dec  7 04:00:39 np0005549474 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  7 04:00:39 np0005549474 kernel: tsc: Detected 2799.998 MHz processor
Dec  7 04:00:39 np0005549474 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  7 04:00:39 np0005549474 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  7 04:00:39 np0005549474 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  7 04:00:39 np0005549474 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  7 04:00:39 np0005549474 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  7 04:00:39 np0005549474 kernel: Using GB pages for direct mapping
Dec  7 04:00:39 np0005549474 kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec  7 04:00:39 np0005549474 kernel: ACPI: Early table checksum verification disabled
Dec  7 04:00:39 np0005549474 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  7 04:00:39 np0005549474 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  7 04:00:39 np0005549474 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  7 04:00:39 np0005549474 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  7 04:00:39 np0005549474 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  7 04:00:39 np0005549474 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  7 04:00:39 np0005549474 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  7 04:00:39 np0005549474 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  7 04:00:39 np0005549474 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  7 04:00:39 np0005549474 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  7 04:00:39 np0005549474 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  7 04:00:39 np0005549474 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  7 04:00:39 np0005549474 kernel: No NUMA configuration found
Dec  7 04:00:39 np0005549474 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  7 04:00:39 np0005549474 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  7 04:00:39 np0005549474 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  7 04:00:39 np0005549474 kernel: Zone ranges:
Dec  7 04:00:39 np0005549474 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  7 04:00:39 np0005549474 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  7 04:00:39 np0005549474 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  7 04:00:39 np0005549474 kernel:  Device   empty
Dec  7 04:00:39 np0005549474 kernel: Movable zone start for each node
Dec  7 04:00:39 np0005549474 kernel: Early memory node ranges
Dec  7 04:00:39 np0005549474 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  7 04:00:39 np0005549474 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  7 04:00:39 np0005549474 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  7 04:00:39 np0005549474 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  7 04:00:39 np0005549474 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  7 04:00:39 np0005549474 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  7 04:00:39 np0005549474 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  7 04:00:39 np0005549474 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  7 04:00:39 np0005549474 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  7 04:00:39 np0005549474 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  7 04:00:39 np0005549474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  7 04:00:39 np0005549474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  7 04:00:39 np0005549474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  7 04:00:39 np0005549474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  7 04:00:39 np0005549474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  7 04:00:39 np0005549474 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  7 04:00:39 np0005549474 kernel: TSC deadline timer available
Dec  7 04:00:39 np0005549474 kernel: CPU topo: Max. logical packages:   8
Dec  7 04:00:39 np0005549474 kernel: CPU topo: Max. logical dies:       8
Dec  7 04:00:39 np0005549474 kernel: CPU topo: Max. dies per package:   1
Dec  7 04:00:39 np0005549474 kernel: CPU topo: Max. threads per core:   1
Dec  7 04:00:39 np0005549474 kernel: CPU topo: Num. cores per package:     1
Dec  7 04:00:39 np0005549474 kernel: CPU topo: Num. threads per package:   1
Dec  7 04:00:39 np0005549474 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  7 04:00:39 np0005549474 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  7 04:00:39 np0005549474 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  7 04:00:39 np0005549474 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  7 04:00:39 np0005549474 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  7 04:00:39 np0005549474 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  7 04:00:39 np0005549474 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  7 04:00:39 np0005549474 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  7 04:00:39 np0005549474 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  7 04:00:39 np0005549474 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  7 04:00:39 np0005549474 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  7 04:00:39 np0005549474 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  7 04:00:39 np0005549474 kernel: Booting paravirtualized kernel on KVM
Dec  7 04:00:39 np0005549474 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  7 04:00:39 np0005549474 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  7 04:00:39 np0005549474 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  7 04:00:39 np0005549474 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  7 04:00:39 np0005549474 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  7 04:00:39 np0005549474 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  7 04:00:39 np0005549474 kernel: random: crng init done
Dec  7 04:00:39 np0005549474 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: Fallback order for Node 0: 0 
Dec  7 04:00:39 np0005549474 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  7 04:00:39 np0005549474 kernel: Policy zone: Normal
Dec  7 04:00:39 np0005549474 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  7 04:00:39 np0005549474 kernel: software IO TLB: area num 8.
Dec  7 04:00:39 np0005549474 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  7 04:00:39 np0005549474 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  7 04:00:39 np0005549474 kernel: ftrace: allocated 193 pages with 3 groups
Dec  7 04:00:39 np0005549474 kernel: Dynamic Preempt: voluntary
Dec  7 04:00:39 np0005549474 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  7 04:00:39 np0005549474 kernel: rcu: 	RCU event tracing is enabled.
Dec  7 04:00:39 np0005549474 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  7 04:00:39 np0005549474 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  7 04:00:39 np0005549474 kernel: 	Rude variant of Tasks RCU enabled.
Dec  7 04:00:39 np0005549474 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  7 04:00:39 np0005549474 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  7 04:00:39 np0005549474 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  7 04:00:39 np0005549474 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  7 04:00:39 np0005549474 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  7 04:00:39 np0005549474 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  7 04:00:39 np0005549474 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  7 04:00:39 np0005549474 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  7 04:00:39 np0005549474 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  7 04:00:39 np0005549474 kernel: Console: colour VGA+ 80x25
Dec  7 04:00:39 np0005549474 kernel: printk: console [ttyS0] enabled
Dec  7 04:00:39 np0005549474 kernel: ACPI: Core revision 20230331
Dec  7 04:00:39 np0005549474 kernel: APIC: Switch to symmetric I/O mode setup
Dec  7 04:00:39 np0005549474 kernel: x2apic enabled
Dec  7 04:00:39 np0005549474 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  7 04:00:39 np0005549474 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  7 04:00:39 np0005549474 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec  7 04:00:39 np0005549474 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  7 04:00:39 np0005549474 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  7 04:00:39 np0005549474 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  7 04:00:39 np0005549474 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  7 04:00:39 np0005549474 kernel: Spectre V2 : Mitigation: Retpolines
Dec  7 04:00:39 np0005549474 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  7 04:00:39 np0005549474 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  7 04:00:39 np0005549474 kernel: RETBleed: Mitigation: untrained return thunk
Dec  7 04:00:39 np0005549474 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  7 04:00:39 np0005549474 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  7 04:00:39 np0005549474 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  7 04:00:39 np0005549474 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  7 04:00:39 np0005549474 kernel: x86/bugs: return thunk changed
Dec  7 04:00:39 np0005549474 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  7 04:00:39 np0005549474 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  7 04:00:39 np0005549474 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  7 04:00:39 np0005549474 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  7 04:00:39 np0005549474 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  7 04:00:39 np0005549474 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  7 04:00:39 np0005549474 kernel: Freeing SMP alternatives memory: 40K
Dec  7 04:00:39 np0005549474 kernel: pid_max: default: 32768 minimum: 301
Dec  7 04:00:39 np0005549474 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  7 04:00:39 np0005549474 kernel: landlock: Up and running.
Dec  7 04:00:39 np0005549474 kernel: Yama: becoming mindful.
Dec  7 04:00:39 np0005549474 kernel: SELinux:  Initializing.
Dec  7 04:00:39 np0005549474 kernel: LSM support for eBPF active
Dec  7 04:00:39 np0005549474 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  7 04:00:39 np0005549474 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  7 04:00:39 np0005549474 kernel: ... version:                0
Dec  7 04:00:39 np0005549474 kernel: ... bit width:              48
Dec  7 04:00:39 np0005549474 kernel: ... generic registers:      6
Dec  7 04:00:39 np0005549474 kernel: ... value mask:             0000ffffffffffff
Dec  7 04:00:39 np0005549474 kernel: ... max period:             00007fffffffffff
Dec  7 04:00:39 np0005549474 kernel: ... fixed-purpose events:   0
Dec  7 04:00:39 np0005549474 kernel: ... event mask:             000000000000003f
Dec  7 04:00:39 np0005549474 kernel: signal: max sigframe size: 1776
Dec  7 04:00:39 np0005549474 kernel: rcu: Hierarchical SRCU implementation.
Dec  7 04:00:39 np0005549474 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  7 04:00:39 np0005549474 kernel: smp: Bringing up secondary CPUs ...
Dec  7 04:00:39 np0005549474 kernel: smpboot: x86: Booting SMP configuration:
Dec  7 04:00:39 np0005549474 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  7 04:00:39 np0005549474 kernel: smp: Brought up 1 node, 8 CPUs
Dec  7 04:00:39 np0005549474 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec  7 04:00:39 np0005549474 kernel: node 0 deferred pages initialised in 12ms
Dec  7 04:00:39 np0005549474 kernel: Memory: 7764032K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec  7 04:00:39 np0005549474 kernel: devtmpfs: initialized
Dec  7 04:00:39 np0005549474 kernel: x86/mm: Memory block size: 128MB
Dec  7 04:00:39 np0005549474 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  7 04:00:39 np0005549474 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  7 04:00:39 np0005549474 kernel: pinctrl core: initialized pinctrl subsystem
Dec  7 04:00:39 np0005549474 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  7 04:00:39 np0005549474 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  7 04:00:39 np0005549474 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  7 04:00:39 np0005549474 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  7 04:00:39 np0005549474 kernel: audit: initializing netlink subsys (disabled)
Dec  7 04:00:39 np0005549474 kernel: audit: type=2000 audit(1765098037.345:1): state=initialized audit_enabled=0 res=1
Dec  7 04:00:39 np0005549474 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  7 04:00:39 np0005549474 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  7 04:00:39 np0005549474 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  7 04:00:39 np0005549474 kernel: cpuidle: using governor menu
Dec  7 04:00:39 np0005549474 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  7 04:00:39 np0005549474 kernel: PCI: Using configuration type 1 for base access
Dec  7 04:00:39 np0005549474 kernel: PCI: Using configuration type 1 for extended access
Dec  7 04:00:39 np0005549474 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  7 04:00:39 np0005549474 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  7 04:00:39 np0005549474 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  7 04:00:39 np0005549474 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  7 04:00:39 np0005549474 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  7 04:00:39 np0005549474 kernel: Demotion targets for Node 0: null
Dec  7 04:00:39 np0005549474 kernel: cryptd: max_cpu_qlen set to 1000
Dec  7 04:00:39 np0005549474 kernel: ACPI: Added _OSI(Module Device)
Dec  7 04:00:39 np0005549474 kernel: ACPI: Added _OSI(Processor Device)
Dec  7 04:00:39 np0005549474 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  7 04:00:39 np0005549474 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  7 04:00:39 np0005549474 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  7 04:00:39 np0005549474 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  7 04:00:39 np0005549474 kernel: ACPI: Interpreter enabled
Dec  7 04:00:39 np0005549474 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  7 04:00:39 np0005549474 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  7 04:00:39 np0005549474 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  7 04:00:39 np0005549474 kernel: PCI: Using E820 reservations for host bridge windows
Dec  7 04:00:39 np0005549474 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  7 04:00:39 np0005549474 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  7 04:00:39 np0005549474 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [3] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [4] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [5] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [6] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [7] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [8] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [9] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [10] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [11] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [12] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [13] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [14] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [15] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [16] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [17] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [18] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [19] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [20] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [21] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [22] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [23] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [24] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [25] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [26] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [27] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [28] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [29] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [30] registered
Dec  7 04:00:39 np0005549474 kernel: acpiphp: Slot [31] registered
Dec  7 04:00:39 np0005549474 kernel: PCI host bridge to bus 0000:00
Dec  7 04:00:39 np0005549474 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  7 04:00:39 np0005549474 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  7 04:00:39 np0005549474 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  7 04:00:39 np0005549474 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  7 04:00:39 np0005549474 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  7 04:00:39 np0005549474 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  7 04:00:39 np0005549474 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  7 04:00:39 np0005549474 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  7 04:00:39 np0005549474 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  7 04:00:39 np0005549474 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  7 04:00:39 np0005549474 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  7 04:00:39 np0005549474 kernel: iommu: Default domain type: Translated
Dec  7 04:00:39 np0005549474 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  7 04:00:39 np0005549474 kernel: SCSI subsystem initialized
Dec  7 04:00:39 np0005549474 kernel: ACPI: bus type USB registered
Dec  7 04:00:39 np0005549474 kernel: usbcore: registered new interface driver usbfs
Dec  7 04:00:39 np0005549474 kernel: usbcore: registered new interface driver hub
Dec  7 04:00:39 np0005549474 kernel: usbcore: registered new device driver usb
Dec  7 04:00:39 np0005549474 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  7 04:00:39 np0005549474 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  7 04:00:39 np0005549474 kernel: PTP clock support registered
Dec  7 04:00:39 np0005549474 kernel: EDAC MC: Ver: 3.0.0
Dec  7 04:00:39 np0005549474 kernel: NetLabel: Initializing
Dec  7 04:00:39 np0005549474 kernel: NetLabel:  domain hash size = 128
Dec  7 04:00:39 np0005549474 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  7 04:00:39 np0005549474 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  7 04:00:39 np0005549474 kernel: PCI: Using ACPI for IRQ routing
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  7 04:00:39 np0005549474 kernel: vgaarb: loaded
Dec  7 04:00:39 np0005549474 kernel: clocksource: Switched to clocksource kvm-clock
Dec  7 04:00:39 np0005549474 kernel: VFS: Disk quotas dquot_6.6.0
Dec  7 04:00:39 np0005549474 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  7 04:00:39 np0005549474 kernel: pnp: PnP ACPI init
Dec  7 04:00:39 np0005549474 kernel: pnp: PnP ACPI: found 5 devices
Dec  7 04:00:39 np0005549474 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  7 04:00:39 np0005549474 kernel: NET: Registered PF_INET protocol family
Dec  7 04:00:39 np0005549474 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  7 04:00:39 np0005549474 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  7 04:00:39 np0005549474 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  7 04:00:39 np0005549474 kernel: NET: Registered PF_XDP protocol family
Dec  7 04:00:39 np0005549474 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  7 04:00:39 np0005549474 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  7 04:00:39 np0005549474 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  7 04:00:39 np0005549474 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  7 04:00:39 np0005549474 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  7 04:00:39 np0005549474 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  7 04:00:39 np0005549474 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 78010 usecs
Dec  7 04:00:39 np0005549474 kernel: PCI: CLS 0 bytes, default 64
Dec  7 04:00:39 np0005549474 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  7 04:00:39 np0005549474 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  7 04:00:39 np0005549474 kernel: Trying to unpack rootfs image as initramfs...
Dec  7 04:00:39 np0005549474 kernel: ACPI: bus type thunderbolt registered
Dec  7 04:00:39 np0005549474 kernel: Initialise system trusted keyrings
Dec  7 04:00:39 np0005549474 kernel: Key type blacklist registered
Dec  7 04:00:39 np0005549474 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  7 04:00:39 np0005549474 kernel: zbud: loaded
Dec  7 04:00:39 np0005549474 kernel: integrity: Platform Keyring initialized
Dec  7 04:00:39 np0005549474 kernel: integrity: Machine keyring initialized
Dec  7 04:00:39 np0005549474 kernel: Freeing initrd memory: 87804K
Dec  7 04:00:39 np0005549474 kernel: NET: Registered PF_ALG protocol family
Dec  7 04:00:39 np0005549474 kernel: xor: automatically using best checksumming function   avx       
Dec  7 04:00:39 np0005549474 kernel: Key type asymmetric registered
Dec  7 04:00:39 np0005549474 kernel: Asymmetric key parser 'x509' registered
Dec  7 04:00:39 np0005549474 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  7 04:00:39 np0005549474 kernel: io scheduler mq-deadline registered
Dec  7 04:00:39 np0005549474 kernel: io scheduler kyber registered
Dec  7 04:00:39 np0005549474 kernel: io scheduler bfq registered
Dec  7 04:00:39 np0005549474 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  7 04:00:39 np0005549474 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  7 04:00:39 np0005549474 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  7 04:00:39 np0005549474 kernel: ACPI: button: Power Button [PWRF]
Dec  7 04:00:39 np0005549474 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  7 04:00:39 np0005549474 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  7 04:00:39 np0005549474 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  7 04:00:39 np0005549474 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  7 04:00:39 np0005549474 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  7 04:00:39 np0005549474 kernel: Non-volatile memory driver v1.3
Dec  7 04:00:39 np0005549474 kernel: rdac: device handler registered
Dec  7 04:00:39 np0005549474 kernel: hp_sw: device handler registered
Dec  7 04:00:39 np0005549474 kernel: emc: device handler registered
Dec  7 04:00:39 np0005549474 kernel: alua: device handler registered
Dec  7 04:00:39 np0005549474 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  7 04:00:39 np0005549474 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  7 04:00:39 np0005549474 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  7 04:00:39 np0005549474 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  7 04:00:39 np0005549474 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  7 04:00:39 np0005549474 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  7 04:00:39 np0005549474 kernel: usb usb1: Product: UHCI Host Controller
Dec  7 04:00:39 np0005549474 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  7 04:00:39 np0005549474 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  7 04:00:39 np0005549474 kernel: hub 1-0:1.0: USB hub found
Dec  7 04:00:39 np0005549474 kernel: hub 1-0:1.0: 2 ports detected
Dec  7 04:00:39 np0005549474 kernel: usbcore: registered new interface driver usbserial_generic
Dec  7 04:00:39 np0005549474 kernel: usbserial: USB Serial support registered for generic
Dec  7 04:00:39 np0005549474 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  7 04:00:39 np0005549474 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  7 04:00:39 np0005549474 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  7 04:00:39 np0005549474 kernel: mousedev: PS/2 mouse device common for all mice
Dec  7 04:00:39 np0005549474 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  7 04:00:39 np0005549474 kernel: rtc_cmos 00:04: registered as rtc0
Dec  7 04:00:39 np0005549474 kernel: rtc_cmos 00:04: setting system clock to 2025-12-07T09:00:38 UTC (1765098038)
Dec  7 04:00:39 np0005549474 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  7 04:00:39 np0005549474 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  7 04:00:39 np0005549474 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  7 04:00:39 np0005549474 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  7 04:00:39 np0005549474 kernel: usbcore: registered new interface driver usbhid
Dec  7 04:00:39 np0005549474 kernel: usbhid: USB HID core driver
Dec  7 04:00:39 np0005549474 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  7 04:00:39 np0005549474 kernel: drop_monitor: Initializing network drop monitor service
Dec  7 04:00:39 np0005549474 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  7 04:00:39 np0005549474 kernel: Initializing XFRM netlink socket
Dec  7 04:00:39 np0005549474 kernel: NET: Registered PF_INET6 protocol family
Dec  7 04:00:39 np0005549474 kernel: Segment Routing with IPv6
Dec  7 04:00:39 np0005549474 kernel: NET: Registered PF_PACKET protocol family
Dec  7 04:00:39 np0005549474 kernel: mpls_gso: MPLS GSO support
Dec  7 04:00:39 np0005549474 kernel: IPI shorthand broadcast: enabled
Dec  7 04:00:39 np0005549474 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  7 04:00:39 np0005549474 kernel: AES CTR mode by8 optimization enabled
Dec  7 04:00:39 np0005549474 kernel: sched_clock: Marking stable (1252003661, 150696494)->(1492906483, -90206328)
Dec  7 04:00:39 np0005549474 kernel: registered taskstats version 1
Dec  7 04:00:39 np0005549474 kernel: Loading compiled-in X.509 certificates
Dec  7 04:00:39 np0005549474 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  7 04:00:39 np0005549474 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  7 04:00:39 np0005549474 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  7 04:00:39 np0005549474 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  7 04:00:39 np0005549474 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  7 04:00:39 np0005549474 kernel: Demotion targets for Node 0: null
Dec  7 04:00:39 np0005549474 kernel: page_owner is disabled
Dec  7 04:00:39 np0005549474 kernel: Key type .fscrypt registered
Dec  7 04:00:39 np0005549474 kernel: Key type fscrypt-provisioning registered
Dec  7 04:00:39 np0005549474 kernel: Key type big_key registered
Dec  7 04:00:39 np0005549474 kernel: Key type encrypted registered
Dec  7 04:00:39 np0005549474 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  7 04:00:39 np0005549474 kernel: Loading compiled-in module X.509 certificates
Dec  7 04:00:39 np0005549474 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  7 04:00:39 np0005549474 kernel: ima: Allocated hash algorithm: sha256
Dec  7 04:00:39 np0005549474 kernel: ima: No architecture policies found
Dec  7 04:00:39 np0005549474 kernel: evm: Initialising EVM extended attributes:
Dec  7 04:00:39 np0005549474 kernel: evm: security.selinux
Dec  7 04:00:39 np0005549474 kernel: evm: security.SMACK64 (disabled)
Dec  7 04:00:39 np0005549474 kernel: evm: security.SMACK64EXEC (disabled)
Dec  7 04:00:39 np0005549474 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  7 04:00:39 np0005549474 kernel: evm: security.SMACK64MMAP (disabled)
Dec  7 04:00:39 np0005549474 kernel: evm: security.apparmor (disabled)
Dec  7 04:00:39 np0005549474 kernel: evm: security.ima
Dec  7 04:00:39 np0005549474 kernel: evm: security.capability
Dec  7 04:00:39 np0005549474 kernel: evm: HMAC attrs: 0x1
Dec  7 04:00:39 np0005549474 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  7 04:00:39 np0005549474 kernel: Running certificate verification RSA selftest
Dec  7 04:00:39 np0005549474 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  7 04:00:39 np0005549474 kernel: Running certificate verification ECDSA selftest
Dec  7 04:00:39 np0005549474 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  7 04:00:39 np0005549474 kernel: clk: Disabling unused clocks
Dec  7 04:00:39 np0005549474 kernel: Freeing unused decrypted memory: 2028K
Dec  7 04:00:39 np0005549474 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  7 04:00:39 np0005549474 kernel: Write protecting the kernel read-only data: 30720k
Dec  7 04:00:39 np0005549474 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  7 04:00:39 np0005549474 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  7 04:00:39 np0005549474 kernel: Run /init as init process
Dec  7 04:00:39 np0005549474 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  7 04:00:39 np0005549474 systemd: Detected virtualization kvm.
Dec  7 04:00:39 np0005549474 systemd: Detected architecture x86-64.
Dec  7 04:00:39 np0005549474 systemd: Running in initrd.
Dec  7 04:00:39 np0005549474 systemd: No hostname configured, using default hostname.
Dec  7 04:00:39 np0005549474 systemd: Hostname set to <localhost>.
Dec  7 04:00:39 np0005549474 systemd: Initializing machine ID from VM UUID.
Dec  7 04:00:39 np0005549474 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  7 04:00:39 np0005549474 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  7 04:00:39 np0005549474 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  7 04:00:39 np0005549474 kernel: usb 1-1: Manufacturer: QEMU
Dec  7 04:00:39 np0005549474 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  7 04:00:39 np0005549474 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  7 04:00:39 np0005549474 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  7 04:00:39 np0005549474 systemd: Queued start job for default target Initrd Default Target.
Dec  7 04:00:39 np0005549474 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  7 04:00:39 np0005549474 systemd: Reached target Local Encrypted Volumes.
Dec  7 04:00:39 np0005549474 systemd: Reached target Initrd /usr File System.
Dec  7 04:00:39 np0005549474 systemd: Reached target Local File Systems.
Dec  7 04:00:39 np0005549474 systemd: Reached target Path Units.
Dec  7 04:00:39 np0005549474 systemd: Reached target Slice Units.
Dec  7 04:00:39 np0005549474 systemd: Reached target Swaps.
Dec  7 04:00:39 np0005549474 systemd: Reached target Timer Units.
Dec  7 04:00:39 np0005549474 systemd: Listening on D-Bus System Message Bus Socket.
Dec  7 04:00:39 np0005549474 systemd: Listening on Journal Socket (/dev/log).
Dec  7 04:00:39 np0005549474 systemd: Listening on Journal Socket.
Dec  7 04:00:39 np0005549474 systemd: Listening on udev Control Socket.
Dec  7 04:00:39 np0005549474 systemd: Listening on udev Kernel Socket.
Dec  7 04:00:39 np0005549474 systemd: Reached target Socket Units.
Dec  7 04:00:39 np0005549474 systemd: Starting Create List of Static Device Nodes...
Dec  7 04:00:39 np0005549474 systemd: Starting Journal Service...
Dec  7 04:00:39 np0005549474 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  7 04:00:39 np0005549474 systemd: Starting Apply Kernel Variables...
Dec  7 04:00:39 np0005549474 systemd: Starting Create System Users...
Dec  7 04:00:39 np0005549474 systemd: Starting Setup Virtual Console...
Dec  7 04:00:39 np0005549474 systemd: Finished Create List of Static Device Nodes.
Dec  7 04:00:39 np0005549474 systemd: Finished Apply Kernel Variables.
Dec  7 04:00:39 np0005549474 systemd: Finished Create System Users.
Dec  7 04:00:39 np0005549474 systemd-journald[308]: Journal started
Dec  7 04:00:39 np0005549474 systemd-journald[308]: Runtime Journal (/run/log/journal/21fd7ebb512a4f779836e8bca79e9734) is 8.0M, max 153.6M, 145.6M free.
Dec  7 04:00:39 np0005549474 systemd-sysusers[312]: Creating group 'users' with GID 100.
Dec  7 04:00:39 np0005549474 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Dec  7 04:00:39 np0005549474 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  7 04:00:39 np0005549474 systemd: Started Journal Service.
Dec  7 04:00:39 np0005549474 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  7 04:00:39 np0005549474 systemd[1]: Starting Create Volatile Files and Directories...
Dec  7 04:00:39 np0005549474 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  7 04:00:39 np0005549474 systemd[1]: Finished Create Volatile Files and Directories.
Dec  7 04:00:39 np0005549474 systemd[1]: Finished Setup Virtual Console.
Dec  7 04:00:39 np0005549474 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  7 04:00:39 np0005549474 systemd[1]: Starting dracut cmdline hook...
Dec  7 04:00:39 np0005549474 dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Dec  7 04:00:39 np0005549474 dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  7 04:00:39 np0005549474 systemd[1]: Finished dracut cmdline hook.
Dec  7 04:00:39 np0005549474 systemd[1]: Starting dracut pre-udev hook...
Dec  7 04:00:39 np0005549474 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  7 04:00:39 np0005549474 kernel: device-mapper: uevent: version 1.0.3
Dec  7 04:00:39 np0005549474 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  7 04:00:39 np0005549474 kernel: RPC: Registered named UNIX socket transport module.
Dec  7 04:00:39 np0005549474 kernel: RPC: Registered udp transport module.
Dec  7 04:00:39 np0005549474 kernel: RPC: Registered tcp transport module.
Dec  7 04:00:39 np0005549474 kernel: RPC: Registered tcp-with-tls transport module.
Dec  7 04:00:39 np0005549474 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  7 04:00:39 np0005549474 rpc.statd[445]: Version 2.5.4 starting
Dec  7 04:00:39 np0005549474 rpc.statd[445]: Initializing NSM state
Dec  7 04:00:39 np0005549474 rpc.idmapd[450]: Setting log level to 0
Dec  7 04:00:39 np0005549474 systemd[1]: Finished dracut pre-udev hook.
Dec  7 04:00:39 np0005549474 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  7 04:00:39 np0005549474 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Dec  7 04:00:39 np0005549474 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  7 04:00:39 np0005549474 systemd[1]: Starting dracut pre-trigger hook...
Dec  7 04:00:39 np0005549474 systemd[1]: Finished dracut pre-trigger hook.
Dec  7 04:00:39 np0005549474 systemd[1]: Starting Coldplug All udev Devices...
Dec  7 04:00:40 np0005549474 systemd[1]: Created slice Slice /system/modprobe.
Dec  7 04:00:40 np0005549474 systemd[1]: Starting Load Kernel Module configfs...
Dec  7 04:00:40 np0005549474 systemd[1]: Finished Coldplug All udev Devices.
Dec  7 04:00:40 np0005549474 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  7 04:00:40 np0005549474 systemd[1]: Finished Load Kernel Module configfs.
Dec  7 04:00:40 np0005549474 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  7 04:00:40 np0005549474 systemd[1]: Reached target Network.
Dec  7 04:00:40 np0005549474 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  7 04:00:40 np0005549474 systemd[1]: Starting dracut initqueue hook...
Dec  7 04:00:40 np0005549474 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  7 04:00:40 np0005549474 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  7 04:00:40 np0005549474 kernel: vda: vda1
Dec  7 04:00:40 np0005549474 kernel: scsi host0: ata_piix
Dec  7 04:00:40 np0005549474 kernel: scsi host1: ata_piix
Dec  7 04:00:40 np0005549474 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  7 04:00:40 np0005549474 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  7 04:00:40 np0005549474 systemd[1]: Mounting Kernel Configuration File System...
Dec  7 04:00:40 np0005549474 systemd[1]: Mounted Kernel Configuration File System.
Dec  7 04:00:40 np0005549474 systemd[1]: Reached target System Initialization.
Dec  7 04:00:40 np0005549474 systemd[1]: Reached target Basic System.
Dec  7 04:00:40 np0005549474 kernel: ata1: found unknown device (class 0)
Dec  7 04:00:40 np0005549474 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  7 04:00:40 np0005549474 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  7 04:00:40 np0005549474 systemd-udevd[483]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 04:00:40 np0005549474 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  7 04:00:40 np0005549474 systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  7 04:00:40 np0005549474 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  7 04:00:40 np0005549474 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  7 04:00:40 np0005549474 systemd[1]: Reached target Initrd Root Device.
Dec  7 04:00:40 np0005549474 systemd[1]: Finished dracut initqueue hook.
Dec  7 04:00:40 np0005549474 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  7 04:00:40 np0005549474 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  7 04:00:40 np0005549474 systemd[1]: Reached target Remote File Systems.
Dec  7 04:00:40 np0005549474 systemd[1]: Starting dracut pre-mount hook...
Dec  7 04:00:40 np0005549474 systemd[1]: Finished dracut pre-mount hook.
Dec  7 04:00:40 np0005549474 systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec  7 04:00:40 np0005549474 systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Dec  7 04:00:40 np0005549474 systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  7 04:00:40 np0005549474 systemd[1]: Mounting /sysroot...
Dec  7 04:00:41 np0005549474 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  7 04:00:41 np0005549474 kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec  7 04:00:41 np0005549474 kernel: XFS (vda1): Ending clean mount
Dec  7 04:00:41 np0005549474 systemd[1]: Mounted /sysroot.
Dec  7 04:00:41 np0005549474 systemd[1]: Reached target Initrd Root File System.
Dec  7 04:00:41 np0005549474 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  7 04:00:41 np0005549474 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  7 04:00:41 np0005549474 systemd[1]: Reached target Initrd File Systems.
Dec  7 04:00:41 np0005549474 systemd[1]: Reached target Initrd Default Target.
Dec  7 04:00:41 np0005549474 systemd[1]: Starting dracut mount hook...
Dec  7 04:00:41 np0005549474 systemd[1]: Finished dracut mount hook.
Dec  7 04:00:41 np0005549474 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  7 04:00:41 np0005549474 rpc.idmapd[450]: exiting on signal 15
Dec  7 04:00:41 np0005549474 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  7 04:00:41 np0005549474 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Network.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Timer Units.
Dec  7 04:00:41 np0005549474 systemd[1]: dbus.socket: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  7 04:00:41 np0005549474 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Initrd Default Target.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Basic System.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Initrd Root Device.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Initrd /usr File System.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Path Units.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Remote File Systems.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Slice Units.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Socket Units.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target System Initialization.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Local File Systems.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Swaps.
Dec  7 04:00:41 np0005549474 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped dracut mount hook.
Dec  7 04:00:41 np0005549474 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped dracut pre-mount hook.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  7 04:00:41 np0005549474 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  7 04:00:41 np0005549474 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped dracut initqueue hook.
Dec  7 04:00:41 np0005549474 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped Apply Kernel Variables.
Dec  7 04:00:41 np0005549474 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  7 04:00:41 np0005549474 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped Coldplug All udev Devices.
Dec  7 04:00:41 np0005549474 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped dracut pre-trigger hook.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  7 04:00:41 np0005549474 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped Setup Virtual Console.
Dec  7 04:00:41 np0005549474 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  7 04:00:41 np0005549474 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  7 04:00:41 np0005549474 systemd[1]: systemd-udevd.service: Consumed 1.019s CPU time.
Dec  7 04:00:41 np0005549474 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Closed udev Control Socket.
Dec  7 04:00:41 np0005549474 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Closed udev Kernel Socket.
Dec  7 04:00:41 np0005549474 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped dracut pre-udev hook.
Dec  7 04:00:41 np0005549474 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped dracut cmdline hook.
Dec  7 04:00:41 np0005549474 systemd[1]: Starting Cleanup udev Database...
Dec  7 04:00:41 np0005549474 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  7 04:00:41 np0005549474 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  7 04:00:41 np0005549474 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Stopped Create System Users.
Dec  7 04:00:41 np0005549474 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  7 04:00:41 np0005549474 systemd[1]: Finished Cleanup udev Database.
Dec  7 04:00:41 np0005549474 systemd[1]: Reached target Switch Root.
Dec  7 04:00:41 np0005549474 systemd[1]: Starting Switch Root...
Dec  7 04:00:41 np0005549474 systemd[1]: Switching root.
Dec  7 04:00:41 np0005549474 systemd-journald[308]: Journal stopped
Dec  7 04:00:42 np0005549474 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  7 04:00:42 np0005549474 kernel: audit: type=1404 audit(1765098041.640:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  7 04:00:42 np0005549474 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 04:00:42 np0005549474 kernel: SELinux:  policy capability open_perms=1
Dec  7 04:00:42 np0005549474 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 04:00:42 np0005549474 kernel: SELinux:  policy capability always_check_network=0
Dec  7 04:00:42 np0005549474 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 04:00:42 np0005549474 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 04:00:42 np0005549474 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 04:00:42 np0005549474 kernel: audit: type=1403 audit(1765098041.772:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  7 04:00:42 np0005549474 systemd: Successfully loaded SELinux policy in 134.979ms.
Dec  7 04:00:42 np0005549474 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.629ms.
Dec  7 04:00:42 np0005549474 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  7 04:00:42 np0005549474 systemd: Detected virtualization kvm.
Dec  7 04:00:42 np0005549474 systemd: Detected architecture x86-64.
Dec  7 04:00:42 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:00:42 np0005549474 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  7 04:00:42 np0005549474 systemd: Stopped Switch Root.
Dec  7 04:00:42 np0005549474 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  7 04:00:42 np0005549474 systemd: Created slice Slice /system/getty.
Dec  7 04:00:42 np0005549474 systemd: Created slice Slice /system/serial-getty.
Dec  7 04:00:42 np0005549474 systemd: Created slice Slice /system/sshd-keygen.
Dec  7 04:00:42 np0005549474 systemd: Created slice User and Session Slice.
Dec  7 04:00:42 np0005549474 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  7 04:00:42 np0005549474 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  7 04:00:42 np0005549474 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  7 04:00:42 np0005549474 systemd: Reached target Local Encrypted Volumes.
Dec  7 04:00:42 np0005549474 systemd: Stopped target Switch Root.
Dec  7 04:00:42 np0005549474 systemd: Stopped target Initrd File Systems.
Dec  7 04:00:42 np0005549474 systemd: Stopped target Initrd Root File System.
Dec  7 04:00:42 np0005549474 systemd: Reached target Local Integrity Protected Volumes.
Dec  7 04:00:42 np0005549474 systemd: Reached target Path Units.
Dec  7 04:00:42 np0005549474 systemd: Reached target rpc_pipefs.target.
Dec  7 04:00:42 np0005549474 systemd: Reached target Slice Units.
Dec  7 04:00:42 np0005549474 systemd: Reached target Swaps.
Dec  7 04:00:42 np0005549474 systemd: Reached target Local Verity Protected Volumes.
Dec  7 04:00:42 np0005549474 systemd: Listening on RPCbind Server Activation Socket.
Dec  7 04:00:42 np0005549474 systemd: Reached target RPC Port Mapper.
Dec  7 04:00:42 np0005549474 systemd: Listening on Process Core Dump Socket.
Dec  7 04:00:42 np0005549474 systemd: Listening on initctl Compatibility Named Pipe.
Dec  7 04:00:42 np0005549474 systemd: Listening on udev Control Socket.
Dec  7 04:00:42 np0005549474 systemd: Listening on udev Kernel Socket.
Dec  7 04:00:42 np0005549474 systemd: Mounting Huge Pages File System...
Dec  7 04:00:42 np0005549474 systemd: Mounting POSIX Message Queue File System...
Dec  7 04:00:42 np0005549474 systemd: Mounting Kernel Debug File System...
Dec  7 04:00:42 np0005549474 systemd: Mounting Kernel Trace File System...
Dec  7 04:00:42 np0005549474 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  7 04:00:42 np0005549474 systemd: Starting Create List of Static Device Nodes...
Dec  7 04:00:42 np0005549474 systemd: Starting Load Kernel Module configfs...
Dec  7 04:00:42 np0005549474 systemd: Starting Load Kernel Module drm...
Dec  7 04:00:42 np0005549474 systemd: Starting Load Kernel Module efi_pstore...
Dec  7 04:00:42 np0005549474 systemd: Starting Load Kernel Module fuse...
Dec  7 04:00:42 np0005549474 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  7 04:00:42 np0005549474 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  7 04:00:42 np0005549474 systemd: Stopped File System Check on Root Device.
Dec  7 04:00:42 np0005549474 systemd: Stopped Journal Service.
Dec  7 04:00:42 np0005549474 kernel: fuse: init (API version 7.37)
Dec  7 04:00:42 np0005549474 systemd: Starting Journal Service...
Dec  7 04:00:42 np0005549474 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  7 04:00:42 np0005549474 systemd: Starting Generate network units from Kernel command line...
Dec  7 04:00:42 np0005549474 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  7 04:00:42 np0005549474 systemd: Starting Remount Root and Kernel File Systems...
Dec  7 04:00:42 np0005549474 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  7 04:00:42 np0005549474 systemd: Starting Apply Kernel Variables...
Dec  7 04:00:42 np0005549474 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  7 04:00:42 np0005549474 systemd: Starting Coldplug All udev Devices...
Dec  7 04:00:42 np0005549474 systemd-journald[679]: Journal started
Dec  7 04:00:42 np0005549474 systemd-journald[679]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  7 04:00:42 np0005549474 systemd[1]: Queued start job for default target Multi-User System.
Dec  7 04:00:42 np0005549474 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  7 04:00:42 np0005549474 kernel: ACPI: bus type drm_connector registered
Dec  7 04:00:42 np0005549474 systemd: Started Journal Service.
Dec  7 04:00:42 np0005549474 systemd[1]: Mounted Huge Pages File System.
Dec  7 04:00:42 np0005549474 systemd[1]: Mounted POSIX Message Queue File System.
Dec  7 04:00:42 np0005549474 systemd[1]: Mounted Kernel Debug File System.
Dec  7 04:00:42 np0005549474 systemd[1]: Mounted Kernel Trace File System.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Create List of Static Device Nodes.
Dec  7 04:00:42 np0005549474 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Load Kernel Module configfs.
Dec  7 04:00:42 np0005549474 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Load Kernel Module drm.
Dec  7 04:00:42 np0005549474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  7 04:00:42 np0005549474 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Load Kernel Module fuse.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Generate network units from Kernel command line.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Apply Kernel Variables.
Dec  7 04:00:42 np0005549474 systemd[1]: Mounting FUSE Control File System...
Dec  7 04:00:42 np0005549474 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  7 04:00:42 np0005549474 systemd[1]: Starting Rebuild Hardware Database...
Dec  7 04:00:42 np0005549474 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  7 04:00:42 np0005549474 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  7 04:00:42 np0005549474 systemd[1]: Starting Load/Save OS Random Seed...
Dec  7 04:00:42 np0005549474 systemd[1]: Starting Create System Users...
Dec  7 04:00:42 np0005549474 systemd-journald[679]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  7 04:00:42 np0005549474 systemd-journald[679]: Received client request to flush runtime journal.
Dec  7 04:00:42 np0005549474 systemd[1]: Mounted FUSE Control File System.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Load/Save OS Random Seed.
Dec  7 04:00:42 np0005549474 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Create System Users.
Dec  7 04:00:42 np0005549474 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Coldplug All udev Devices.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  7 04:00:42 np0005549474 systemd[1]: Reached target Preparation for Local File Systems.
Dec  7 04:00:42 np0005549474 systemd[1]: Reached target Local File Systems.
Dec  7 04:00:42 np0005549474 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  7 04:00:42 np0005549474 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  7 04:00:42 np0005549474 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  7 04:00:42 np0005549474 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  7 04:00:42 np0005549474 systemd[1]: Starting Automatic Boot Loader Update...
Dec  7 04:00:42 np0005549474 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  7 04:00:42 np0005549474 systemd[1]: Starting Create Volatile Files and Directories...
Dec  7 04:00:42 np0005549474 bootctl[699]: Couldn't find EFI system partition, skipping.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Automatic Boot Loader Update.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Create Volatile Files and Directories.
Dec  7 04:00:42 np0005549474 systemd[1]: Starting Security Auditing Service...
Dec  7 04:00:42 np0005549474 systemd[1]: Starting RPC Bind...
Dec  7 04:00:42 np0005549474 systemd[1]: Starting Rebuild Journal Catalog...
Dec  7 04:00:42 np0005549474 auditd[706]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  7 04:00:42 np0005549474 auditd[706]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Rebuild Journal Catalog.
Dec  7 04:00:42 np0005549474 augenrules[711]: /sbin/augenrules: No change
Dec  7 04:00:42 np0005549474 systemd[1]: Started RPC Bind.
Dec  7 04:00:42 np0005549474 augenrules[726]: No rules
Dec  7 04:00:42 np0005549474 augenrules[726]: enabled 1
Dec  7 04:00:42 np0005549474 augenrules[726]: failure 1
Dec  7 04:00:42 np0005549474 augenrules[726]: pid 706
Dec  7 04:00:42 np0005549474 augenrules[726]: rate_limit 0
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog_limit 8192
Dec  7 04:00:42 np0005549474 augenrules[726]: lost 0
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog 3
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog_wait_time 60000
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog_wait_time_actual 0
Dec  7 04:00:42 np0005549474 augenrules[726]: enabled 1
Dec  7 04:00:42 np0005549474 augenrules[726]: failure 1
Dec  7 04:00:42 np0005549474 augenrules[726]: pid 706
Dec  7 04:00:42 np0005549474 augenrules[726]: rate_limit 0
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog_limit 8192
Dec  7 04:00:42 np0005549474 augenrules[726]: lost 0
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog 0
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog_wait_time 60000
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog_wait_time_actual 0
Dec  7 04:00:42 np0005549474 augenrules[726]: enabled 1
Dec  7 04:00:42 np0005549474 augenrules[726]: failure 1
Dec  7 04:00:42 np0005549474 augenrules[726]: pid 706
Dec  7 04:00:42 np0005549474 augenrules[726]: rate_limit 0
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog_limit 8192
Dec  7 04:00:42 np0005549474 augenrules[726]: lost 0
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog 3
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog_wait_time 60000
Dec  7 04:00:42 np0005549474 augenrules[726]: backlog_wait_time_actual 0
Dec  7 04:00:42 np0005549474 systemd[1]: Started Security Auditing Service.
Dec  7 04:00:42 np0005549474 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  7 04:00:42 np0005549474 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  7 04:00:43 np0005549474 systemd[1]: Finished Rebuild Hardware Database.
Dec  7 04:00:43 np0005549474 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  7 04:00:43 np0005549474 systemd[1]: Starting Update is Completed...
Dec  7 04:00:43 np0005549474 systemd[1]: Finished Update is Completed.
Dec  7 04:00:43 np0005549474 systemd-udevd[734]: Using default interface naming scheme 'rhel-9.0'.
Dec  7 04:00:43 np0005549474 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  7 04:00:43 np0005549474 systemd[1]: Reached target System Initialization.
Dec  7 04:00:43 np0005549474 systemd[1]: Started dnf makecache --timer.
Dec  7 04:00:43 np0005549474 systemd[1]: Started Daily rotation of log files.
Dec  7 04:00:43 np0005549474 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  7 04:00:43 np0005549474 systemd[1]: Reached target Timer Units.
Dec  7 04:00:43 np0005549474 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  7 04:00:43 np0005549474 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  7 04:00:43 np0005549474 systemd[1]: Reached target Socket Units.
Dec  7 04:00:43 np0005549474 systemd[1]: Starting D-Bus System Message Bus...
Dec  7 04:00:43 np0005549474 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  7 04:00:43 np0005549474 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  7 04:00:43 np0005549474 systemd-udevd[748]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 04:00:43 np0005549474 systemd[1]: Starting Load Kernel Module configfs...
Dec  7 04:00:43 np0005549474 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  7 04:00:43 np0005549474 systemd[1]: Finished Load Kernel Module configfs.
Dec  7 04:00:43 np0005549474 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  7 04:00:43 np0005549474 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  7 04:00:43 np0005549474 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  7 04:00:43 np0005549474 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  7 04:00:43 np0005549474 systemd[1]: Started D-Bus System Message Bus.
Dec  7 04:00:43 np0005549474 systemd[1]: Reached target Basic System.
Dec  7 04:00:43 np0005549474 dbus-broker-lau[772]: Ready
Dec  7 04:00:43 np0005549474 systemd[1]: Starting NTP client/server...
Dec  7 04:00:43 np0005549474 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  7 04:00:43 np0005549474 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  7 04:00:43 np0005549474 systemd[1]: Starting IPv4 firewall with iptables...
Dec  7 04:00:43 np0005549474 systemd[1]: Started irqbalance daemon.
Dec  7 04:00:43 np0005549474 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  7 04:00:43 np0005549474 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  7 04:00:43 np0005549474 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  7 04:00:43 np0005549474 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  7 04:00:43 np0005549474 systemd[1]: Reached target sshd-keygen.target.
Dec  7 04:00:43 np0005549474 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  7 04:00:43 np0005549474 systemd[1]: Reached target User and Group Name Lookups.
Dec  7 04:00:43 np0005549474 systemd[1]: Starting User Login Management...
Dec  7 04:00:43 np0005549474 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  7 04:00:43 np0005549474 systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  7 04:00:43 np0005549474 systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  7 04:00:43 np0005549474 chronyd[805]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  7 04:00:43 np0005549474 chronyd[805]: Loaded 0 symmetric keys
Dec  7 04:00:43 np0005549474 chronyd[805]: Using right/UTC timezone to obtain leap second data
Dec  7 04:00:43 np0005549474 chronyd[805]: Loaded seccomp filter (level 2)
Dec  7 04:00:43 np0005549474 systemd[1]: Started NTP client/server.
Dec  7 04:00:43 np0005549474 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  7 04:00:43 np0005549474 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  7 04:00:43 np0005549474 kernel: kvm_amd: TSC scaling supported
Dec  7 04:00:43 np0005549474 kernel: kvm_amd: Nested Virtualization enabled
Dec  7 04:00:43 np0005549474 kernel: kvm_amd: Nested Paging enabled
Dec  7 04:00:43 np0005549474 kernel: kvm_amd: LBR virtualization supported
Dec  7 04:00:43 np0005549474 systemd-logind[796]: New seat seat0.
Dec  7 04:00:43 np0005549474 kernel: Console: switching to colour dummy device 80x25
Dec  7 04:00:43 np0005549474 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  7 04:00:43 np0005549474 kernel: [drm] features: -context_init
Dec  7 04:00:43 np0005549474 kernel: [drm] number of scanouts: 1
Dec  7 04:00:43 np0005549474 kernel: [drm] number of cap sets: 0
Dec  7 04:00:43 np0005549474 systemd[1]: Started User Login Management.
Dec  7 04:00:43 np0005549474 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  7 04:00:43 np0005549474 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  7 04:00:43 np0005549474 kernel: Console: switching to colour frame buffer device 128x48
Dec  7 04:00:43 np0005549474 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  7 04:00:43 np0005549474 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  7 04:00:43 np0005549474 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  7 04:00:43 np0005549474 iptables.init[787]: iptables: Applying firewall rules: [  OK  ]
Dec  7 04:00:43 np0005549474 systemd[1]: Finished IPv4 firewall with iptables.
Dec  7 04:00:43 np0005549474 cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sun, 07 Dec 2025 09:00:43 +0000. Up 6.54 seconds.
Dec  7 04:00:44 np0005549474 systemd[1]: run-cloud\x2dinit-tmp-tmphwuvn04p.mount: Deactivated successfully.
Dec  7 04:00:44 np0005549474 systemd[1]: Starting Hostname Service...
Dec  7 04:00:44 np0005549474 systemd[1]: Started Hostname Service.
Dec  7 04:00:44 np0005549474 systemd-hostnamed[856]: Hostname set to <np0005549474.novalocal> (static)
Dec  7 04:00:44 np0005549474 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  7 04:00:44 np0005549474 systemd[1]: Reached target Preparation for Network.
Dec  7 04:00:44 np0005549474 systemd[1]: Starting Network Manager...
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3603] NetworkManager (version 1.54.1-1.el9) is starting... (boot:4452dece-8eac-4524-b110-088a9e058714)
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3608] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3684] manager[0x55a357ae9080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3719] hostname: hostname: using hostnamed
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3720] hostname: static hostname changed from (none) to "np0005549474.novalocal"
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3724] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3902] manager[0x55a357ae9080]: rfkill: Wi-Fi hardware radio set enabled
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3903] manager[0x55a357ae9080]: rfkill: WWAN hardware radio set enabled
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3940] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3941] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3941] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3942] manager: Networking is enabled by state file
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3943] settings: Loaded settings plugin: keyfile (internal)
Dec  7 04:00:44 np0005549474 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3952] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3969] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3983] dhcp: init: Using DHCP client 'internal'
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.3985] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4001] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4009] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4017] device (lo): Activation: starting connection 'lo' (95a1d56c-e265-4e9f-bb61-bafa31bf60dd)
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4028] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4032] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4056] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4061] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4064] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4066] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4068] device (eth0): carrier: link connected
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4074] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4080] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4086] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4091] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4092] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4095] manager: NetworkManager state is now CONNECTING
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4096] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4104] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4107] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 04:00:44 np0005549474 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 04:00:44 np0005549474 systemd[1]: Started Network Manager.
Dec  7 04:00:44 np0005549474 systemd[1]: Reached target Network.
Dec  7 04:00:44 np0005549474 systemd[1]: Starting Network Manager Wait Online...
Dec  7 04:00:44 np0005549474 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  7 04:00:44 np0005549474 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4298] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4302] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  7 04:00:44 np0005549474 NetworkManager[860]: <info>  [1765098044.4308] device (lo): Activation: successful, device activated.
Dec  7 04:00:44 np0005549474 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  7 04:00:44 np0005549474 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  7 04:00:44 np0005549474 systemd[1]: Reached target NFS client services.
Dec  7 04:00:44 np0005549474 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  7 04:00:44 np0005549474 systemd[1]: Reached target Remote File Systems.
Dec  7 04:00:44 np0005549474 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  7 04:00:47 np0005549474 NetworkManager[860]: <info>  [1765098047.4548] dhcp4 (eth0): state changed new lease, address=38.102.83.70
Dec  7 04:00:47 np0005549474 NetworkManager[860]: <info>  [1765098047.4558] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  7 04:00:47 np0005549474 NetworkManager[860]: <info>  [1765098047.4580] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:00:47 np0005549474 NetworkManager[860]: <info>  [1765098047.4630] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:00:47 np0005549474 NetworkManager[860]: <info>  [1765098047.4632] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:00:47 np0005549474 NetworkManager[860]: <info>  [1765098047.4634] manager: NetworkManager state is now CONNECTED_SITE
Dec  7 04:00:47 np0005549474 NetworkManager[860]: <info>  [1765098047.4638] device (eth0): Activation: successful, device activated.
Dec  7 04:00:47 np0005549474 NetworkManager[860]: <info>  [1765098047.4646] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  7 04:00:47 np0005549474 NetworkManager[860]: <info>  [1765098047.4655] manager: startup complete
Dec  7 04:00:47 np0005549474 systemd[1]: Finished Network Manager Wait Online.
Dec  7 04:00:47 np0005549474 systemd[1]: Starting Cloud-init: Network Stage...
Dec  7 04:00:47 np0005549474 cloud-init[925]: Cloud-init v. 24.4-7.el9 running 'init' at Sun, 07 Dec 2025 09:00:47 +0000. Up 10.51 seconds.
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: |  eth0  | True |         38.102.83.70         | 255.255.255.0 | global | fa:16:3e:6c:40:ef |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: |  eth0  | True | fe80::f816:3eff:fe6c:40ef/64 |       .       |  link  | fa:16:3e:6c:40:ef |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Dec  7 04:00:47 np0005549474 cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  7 04:00:48 np0005549474 cloud-init[925]: Generating public/private rsa key pair.
Dec  7 04:00:48 np0005549474 cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  7 04:00:48 np0005549474 cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  7 04:00:48 np0005549474 cloud-init[925]: The key fingerprint is:
Dec  7 04:00:48 np0005549474 cloud-init[925]: SHA256:ZYorKQn6buNcV3MlitxrbG6CQgK8XC0u8EpQANmvF44 root@np0005549474.novalocal
Dec  7 04:00:48 np0005549474 cloud-init[925]: The key's randomart image is:
Dec  7 04:00:48 np0005549474 cloud-init[925]: +---[RSA 3072]----+
Dec  7 04:00:48 np0005549474 cloud-init[925]: |++               |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |. o              |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |.. ..     + .    |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |+. ooo + = o     |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |*.++..+ S .      |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |o*Eoo. + +       |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |o++.+.o =        |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |.o+o.o.+.        |
Dec  7 04:00:48 np0005549474 cloud-init[925]: | +=o   o.        |
Dec  7 04:00:48 np0005549474 cloud-init[925]: +----[SHA256]-----+
Dec  7 04:00:48 np0005549474 cloud-init[925]: Generating public/private ecdsa key pair.
Dec  7 04:00:48 np0005549474 cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  7 04:00:48 np0005549474 cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  7 04:00:48 np0005549474 cloud-init[925]: The key fingerprint is:
Dec  7 04:00:48 np0005549474 cloud-init[925]: SHA256:AbjH48qHDcRKsqAqm7SxTiIjTiQjt27cs1hVIoGuj+g root@np0005549474.novalocal
Dec  7 04:00:48 np0005549474 cloud-init[925]: The key's randomart image is:
Dec  7 04:00:48 np0005549474 cloud-init[925]: +---[ECDSA 256]---+
Dec  7 04:00:48 np0005549474 cloud-init[925]: |   ....          |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |  . .. .         |
Dec  7 04:00:48 np0005549474 cloud-init[925]: | . ..o. o        |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |o o +.+o .       |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |**.o o..S        |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |Bo.....          |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |BO.o.=           |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |&+Bo* o          |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |*Eo .+           |
Dec  7 04:00:48 np0005549474 cloud-init[925]: +----[SHA256]-----+
Dec  7 04:00:48 np0005549474 cloud-init[925]: Generating public/private ed25519 key pair.
Dec  7 04:00:48 np0005549474 cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  7 04:00:48 np0005549474 cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  7 04:00:48 np0005549474 cloud-init[925]: The key fingerprint is:
Dec  7 04:00:48 np0005549474 cloud-init[925]: SHA256:R98W4QqFLK4+LlancqSjv7ndu4P+8GTgk09e6dkQdpY root@np0005549474.novalocal
Dec  7 04:00:48 np0005549474 cloud-init[925]: The key's randomart image is:
Dec  7 04:00:48 np0005549474 cloud-init[925]: +--[ED25519 256]--+
Dec  7 04:00:48 np0005549474 cloud-init[925]: |         . .. .  |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |        . o. . . |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |       . .o   o  |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |        .. o + . |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |      ..S + E o  |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |     .+o.o = .   |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |     ==++ +      |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |    *o*@.o +     |
Dec  7 04:00:48 np0005549474 cloud-init[925]: |  .+=X+oO+o .    |
Dec  7 04:00:48 np0005549474 cloud-init[925]: +----[SHA256]-----+
Dec  7 04:00:49 np0005549474 systemd[1]: Finished Cloud-init: Network Stage.
Dec  7 04:00:49 np0005549474 systemd[1]: Reached target Cloud-config availability.
Dec  7 04:00:49 np0005549474 systemd[1]: Reached target Network is Online.
Dec  7 04:00:49 np0005549474 systemd[1]: Starting Cloud-init: Config Stage...
Dec  7 04:00:49 np0005549474 systemd[1]: Starting Crash recovery kernel arming...
Dec  7 04:00:49 np0005549474 systemd[1]: Starting Notify NFS peers of a restart...
Dec  7 04:00:49 np0005549474 systemd[1]: Starting System Logging Service...
Dec  7 04:00:49 np0005549474 sm-notify[1009]: Version 2.5.4 starting
Dec  7 04:00:49 np0005549474 systemd[1]: Starting OpenSSH server daemon...
Dec  7 04:00:49 np0005549474 systemd[1]: Starting Permit User Sessions...
Dec  7 04:00:49 np0005549474 systemd[1]: Started Notify NFS peers of a restart.
Dec  7 04:00:49 np0005549474 systemd[1]: Finished Permit User Sessions.
Dec  7 04:00:49 np0005549474 systemd[1]: Started OpenSSH server daemon.
Dec  7 04:00:49 np0005549474 systemd[1]: Started Command Scheduler.
Dec  7 04:00:49 np0005549474 systemd[1]: Started Getty on tty1.
Dec  7 04:00:49 np0005549474 systemd[1]: Started Serial Getty on ttyS0.
Dec  7 04:00:49 np0005549474 systemd[1]: Reached target Login Prompts.
Dec  7 04:00:49 np0005549474 rsyslogd[1010]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1010" x-info="https://www.rsyslog.com"] start
Dec  7 04:00:49 np0005549474 systemd[1]: Started System Logging Service.
Dec  7 04:00:49 np0005549474 rsyslogd[1010]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  7 04:00:49 np0005549474 systemd[1]: Reached target Multi-User System.
Dec  7 04:00:49 np0005549474 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  7 04:00:49 np0005549474 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  7 04:00:49 np0005549474 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  7 04:00:49 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 04:00:49 np0005549474 kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Dec  7 04:00:49 np0005549474 kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  7 04:00:49 np0005549474 cloud-init[1121]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sun, 07 Dec 2025 09:00:49 +0000. Up 12.05 seconds.
Dec  7 04:00:49 np0005549474 systemd[1]: Finished Cloud-init: Config Stage.
Dec  7 04:00:49 np0005549474 systemd[1]: Starting Cloud-init: Final Stage...
Dec  7 04:00:49 np0005549474 dracut[1290]: dracut-057-102.git20250818.el9
Dec  7 04:00:49 np0005549474 cloud-init[1294]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sun, 07 Dec 2025 09:00:49 +0000. Up 12.46 seconds.
Dec  7 04:00:49 np0005549474 cloud-init[1308]: #############################################################
Dec  7 04:00:49 np0005549474 cloud-init[1309]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  7 04:00:49 np0005549474 cloud-init[1311]: 256 SHA256:AbjH48qHDcRKsqAqm7SxTiIjTiQjt27cs1hVIoGuj+g root@np0005549474.novalocal (ECDSA)
Dec  7 04:00:49 np0005549474 cloud-init[1313]: 256 SHA256:R98W4QqFLK4+LlancqSjv7ndu4P+8GTgk09e6dkQdpY root@np0005549474.novalocal (ED25519)
Dec  7 04:00:49 np0005549474 cloud-init[1315]: 3072 SHA256:ZYorKQn6buNcV3MlitxrbG6CQgK8XC0u8EpQANmvF44 root@np0005549474.novalocal (RSA)
Dec  7 04:00:49 np0005549474 cloud-init[1316]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  7 04:00:49 np0005549474 cloud-init[1317]: #############################################################
Dec  7 04:00:49 np0005549474 cloud-init[1294]: Cloud-init v. 24.4-7.el9 finished at Sun, 07 Dec 2025 09:00:49 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.65 seconds
Dec  7 04:00:49 np0005549474 dracut[1292]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  7 04:00:50 np0005549474 systemd[1]: Finished Cloud-init: Final Stage.
Dec  7 04:00:50 np0005549474 systemd[1]: Reached target Cloud-init target.
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  7 04:00:50 np0005549474 dracut[1292]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: memstrack is not available
Dec  7 04:00:51 np0005549474 dracut[1292]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  7 04:00:51 np0005549474 dracut[1292]: memstrack is not available
Dec  7 04:00:51 np0005549474 dracut[1292]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  7 04:00:51 np0005549474 dracut[1292]: *** Including module: systemd ***
Dec  7 04:00:52 np0005549474 dracut[1292]: *** Including module: fips ***
Dec  7 04:00:52 np0005549474 chronyd[805]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Dec  7 04:00:52 np0005549474 chronyd[805]: System clock TAI offset set to 37 seconds
Dec  7 04:00:52 np0005549474 dracut[1292]: *** Including module: systemd-initrd ***
Dec  7 04:00:52 np0005549474 dracut[1292]: *** Including module: i18n ***
Dec  7 04:00:52 np0005549474 dracut[1292]: *** Including module: drm ***
Dec  7 04:00:53 np0005549474 dracut[1292]: *** Including module: prefixdevname ***
Dec  7 04:00:53 np0005549474 dracut[1292]: *** Including module: kernel-modules ***
Dec  7 04:00:53 np0005549474 irqbalance[789]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  7 04:00:53 np0005549474 irqbalance[789]: IRQ 25 affinity is now unmanaged
Dec  7 04:00:53 np0005549474 irqbalance[789]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  7 04:00:53 np0005549474 irqbalance[789]: IRQ 31 affinity is now unmanaged
Dec  7 04:00:53 np0005549474 irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  7 04:00:53 np0005549474 irqbalance[789]: IRQ 28 affinity is now unmanaged
Dec  7 04:00:53 np0005549474 irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  7 04:00:53 np0005549474 irqbalance[789]: IRQ 32 affinity is now unmanaged
Dec  7 04:00:53 np0005549474 irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  7 04:00:53 np0005549474 irqbalance[789]: IRQ 30 affinity is now unmanaged
Dec  7 04:00:53 np0005549474 irqbalance[789]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  7 04:00:53 np0005549474 irqbalance[789]: IRQ 29 affinity is now unmanaged
Dec  7 04:00:53 np0005549474 kernel: block vda: the capability attribute has been deprecated.
Dec  7 04:00:53 np0005549474 dracut[1292]: *** Including module: kernel-modules-extra ***
Dec  7 04:00:53 np0005549474 dracut[1292]: *** Including module: qemu ***
Dec  7 04:00:54 np0005549474 dracut[1292]: *** Including module: fstab-sys ***
Dec  7 04:00:54 np0005549474 dracut[1292]: *** Including module: rootfs-block ***
Dec  7 04:00:54 np0005549474 dracut[1292]: *** Including module: terminfo ***
Dec  7 04:00:54 np0005549474 dracut[1292]: *** Including module: udev-rules ***
Dec  7 04:00:54 np0005549474 dracut[1292]: Skipping udev rule: 91-permissions.rules
Dec  7 04:00:54 np0005549474 dracut[1292]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  7 04:00:54 np0005549474 dracut[1292]: *** Including module: virtiofs ***
Dec  7 04:00:54 np0005549474 dracut[1292]: *** Including module: dracut-systemd ***
Dec  7 04:00:55 np0005549474 dracut[1292]: *** Including module: usrmount ***
Dec  7 04:00:55 np0005549474 dracut[1292]: *** Including module: base ***
Dec  7 04:00:55 np0005549474 dracut[1292]: *** Including module: fs-lib ***
Dec  7 04:00:55 np0005549474 dracut[1292]: *** Including module: kdumpbase ***
Dec  7 04:00:55 np0005549474 dracut[1292]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  7 04:00:55 np0005549474 dracut[1292]:  microcode_ctl module: mangling fw_dir
Dec  7 04:00:55 np0005549474 dracut[1292]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  7 04:00:55 np0005549474 dracut[1292]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  7 04:00:55 np0005549474 dracut[1292]:    microcode_ctl: configuration "intel" is ignored
Dec  7 04:00:55 np0005549474 dracut[1292]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  7 04:00:55 np0005549474 dracut[1292]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  7 04:00:55 np0005549474 dracut[1292]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  7 04:00:56 np0005549474 dracut[1292]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  7 04:00:56 np0005549474 dracut[1292]: *** Including module: openssl ***
Dec  7 04:00:56 np0005549474 dracut[1292]: *** Including module: shutdown ***
Dec  7 04:00:56 np0005549474 dracut[1292]: *** Including module: squash ***
Dec  7 04:00:56 np0005549474 dracut[1292]: *** Including modules done ***
Dec  7 04:00:56 np0005549474 dracut[1292]: *** Installing kernel module dependencies ***
Dec  7 04:00:57 np0005549474 dracut[1292]: *** Installing kernel module dependencies done ***
Dec  7 04:00:57 np0005549474 dracut[1292]: *** Resolving executable dependencies ***
Dec  7 04:00:57 np0005549474 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 04:00:59 np0005549474 dracut[1292]: *** Resolving executable dependencies done ***
Dec  7 04:00:59 np0005549474 dracut[1292]: *** Generating early-microcode cpio image ***
Dec  7 04:00:59 np0005549474 dracut[1292]: *** Store current command line parameters ***
Dec  7 04:00:59 np0005549474 dracut[1292]: Stored kernel commandline:
Dec  7 04:00:59 np0005549474 dracut[1292]: No dracut internal kernel commandline stored in the initramfs
Dec  7 04:00:59 np0005549474 dracut[1292]: *** Install squash loader ***
Dec  7 04:01:00 np0005549474 dracut[1292]: *** Squashing the files inside the initramfs ***
Dec  7 04:01:01 np0005549474 dracut[1292]: *** Squashing the files inside the initramfs done ***
Dec  7 04:01:01 np0005549474 dracut[1292]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  7 04:01:01 np0005549474 dracut[1292]: *** Hardlinking files ***
Dec  7 04:01:01 np0005549474 dracut[1292]: *** Hardlinking files done ***
Dec  7 04:01:01 np0005549474 dracut[1292]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  7 04:01:02 np0005549474 kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Dec  7 04:01:02 np0005549474 kdumpctl[1019]: kdump: Starting kdump: [OK]
Dec  7 04:01:02 np0005549474 systemd[1]: Finished Crash recovery kernel arming.
Dec  7 04:01:02 np0005549474 systemd[1]: Startup finished in 1.650s (kernel) + 2.702s (initrd) + 20.943s (userspace) = 25.295s.
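
The sequence above is the standard RHEL 9 crash-kernel arming path: kdumpctl drives dracut to build a dedicated /boot/initramfs-5.14.0-645.el9.x86_64kdump.img, then loads it via kexec ("loaded kdump kernel"). A minimal verification sketch in the same Ansible idiom as the rest of this job; the task names are illustrative, not taken from the job, while kdumpctl(8) and /sys/kernel/kexec_crash_loaded are stock RHEL 9 interfaces:

    # Sketch: confirm the crash kernel armed by the kdumpctl run logged above.
    - name: Check kdump service status
      ansible.builtin.command: kdumpctl status
      become: true
      register: kdump_status
      changed_when: false

    - name: Assert a crash kernel is loaded (sysfs reports 1)
      ansible.builtin.command: cat /sys/kernel/kexec_crash_loaded
      register: crash_loaded
      changed_when: false
      failed_when: crash_loaded.stdout != '1'
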
Dec  7 04:01:14 np0005549474 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  7 04:01:24 np0005549474 systemd[1]: Created slice User Slice of UID 1000.
Dec  7 04:01:24 np0005549474 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  7 04:01:24 np0005549474 systemd-logind[796]: New session 1 of user zuul.
Dec  7 04:01:24 np0005549474 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  7 04:01:24 np0005549474 systemd[1]: Starting User Manager for UID 1000...
Dec  7 04:01:24 np0005549474 systemd[4320]: Queued start job for default target Main User Target.
Dec  7 04:01:24 np0005549474 systemd[4320]: Created slice User Application Slice.
Dec  7 04:01:24 np0005549474 systemd[4320]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  7 04:01:24 np0005549474 systemd[4320]: Started Daily Cleanup of User's Temporary Directories.
Dec  7 04:01:24 np0005549474 systemd[4320]: Reached target Paths.
Dec  7 04:01:24 np0005549474 systemd[4320]: Reached target Timers.
Dec  7 04:01:24 np0005549474 systemd[4320]: Starting D-Bus User Message Bus Socket...
Dec  7 04:01:24 np0005549474 systemd[4320]: Starting Create User's Volatile Files and Directories...
Dec  7 04:01:24 np0005549474 systemd[4320]: Listening on D-Bus User Message Bus Socket.
Dec  7 04:01:24 np0005549474 systemd[4320]: Reached target Sockets.
Dec  7 04:01:24 np0005549474 systemd[4320]: Finished Create User's Volatile Files and Directories.
Dec  7 04:01:24 np0005549474 systemd[4320]: Reached target Basic System.
Dec  7 04:01:24 np0005549474 systemd[4320]: Reached target Main User Target.
Dec  7 04:01:24 np0005549474 systemd[4320]: Startup finished in 165ms.
Dec  7 04:01:24 np0005549474 systemd[1]: Started User Manager for UID 1000.
Dec  7 04:01:24 np0005549474 systemd[1]: Started Session 1 of User zuul.
Dec  7 04:01:25 np0005549474 python3[4402]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:01:27 np0005549474 python3[4430]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:01:36 np0005549474 python3[4490]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:01:37 np0005549474 python3[4530]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  7 04:01:39 np0005549474 python3[4556]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkQcJK9AckSLB8kRJpoBvjJlvdUM1NPOv6gh0ztTck1XCwf9cQ7K6FgbgW5Zk5QtpT2Bskyk11uc8i8c2H7S/TLAvuLME63JPzSCN4U+cOYMO66ItZhTrMa8L3fJT6S2czxsCrc3UibOY/sgobMkVnTmivIl06HznGPkKZo4Vk3Pi6+wpDXgoav0MRspeRyuteMK3loUZjYiCGyQ89o0q92X6j4eA/8+lulbNsk3A+jgjjDfevRwHrl2J9/AJjxjHcK3Z2ZeCUvL89HwqGBIcuc7rrUfMRGP4ffy9GrNlMVOWz1TxigfyNSLFnmbR3B61MrGnlsygl3l+TroIGJhPvioZx2GFfCZ+oy9Loz3KObdiKDhHEVJkjFrFUeWmTpVnLursJhZOkKKQRZXtpk+klCh6rT0/LBH1X97OuWKikCL/fEXsTM3OdQ88ahIrCanC3ox9MqZCU1b3l16zHWyU8l5D42mYN79XxFZD+xD4kH1poO/KlY+66bM73wx3JNIM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:39 np0005549474 python3[4580]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:01:40 np0005549474 python3[4679]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:01:40 np0005549474 python3[4750]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765098099.9870965-251-18550548754287/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=b47cd6d8bda54cadb213ff8da60cb142_id_rsa follow=False checksum=16b4efc491a0b7940e21a1d94a54c06d2c2a7618 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:01:41 np0005549474 python3[4873]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:01:41 np0005549474 python3[4944]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765098100.962146-306-94024254647401/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=b47cd6d8bda54cadb213ff8da60cb142_id_rsa.pub follow=False checksum=09ed89e17a15aaae00313e3fe40cedf6270ab77f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:01:43 np0005549474 python3[4992]: ansible-ping Invoked with data=pong
Dec  7 04:01:44 np0005549474 python3[5016]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:01:47 np0005549474 python3[5074]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  7 04:01:48 np0005549474 python3[5106]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:01:48 np0005549474 python3[5130]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:01:49 np0005549474 python3[5154]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:01:49 np0005549474 python3[5178]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:01:49 np0005549474 python3[5202]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:01:49 np0005549474 python3[5226]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:01:51 np0005549474 python3[5252]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:01:52 np0005549474 python3[5330]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:01:53 np0005549474 python3[5403]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765098112.0411966-31-9295769412371/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:01:53 np0005549474 python3[5451]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:54 np0005549474 python3[5475]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:54 np0005549474 python3[5499]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:54 np0005549474 python3[5523]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:54 np0005549474 python3[5547]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:55 np0005549474 python3[5571]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:55 np0005549474 python3[5595]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:55 np0005549474 python3[5619]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:56 np0005549474 python3[5643]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:56 np0005549474 python3[5667]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:56 np0005549474 python3[5691]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:56 np0005549474 python3[5715]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:57 np0005549474 python3[5739]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:57 np0005549474 python3[5763]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:57 np0005549474 python3[5787]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:58 np0005549474 python3[5811]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:58 np0005549474 python3[5835]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:58 np0005549474 python3[5859]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:58 np0005549474 python3[5883]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:59 np0005549474 python3[5907]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:59 np0005549474 python3[5931]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:01:59 np0005549474 python3[5955]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:02:00 np0005549474 python3[5979]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:02:00 np0005549474 python3[6003]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:02:00 np0005549474 python3[6027]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:02:00 np0005549474 python3[6051]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
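
The run of ansible-authorized_key calls above installs one developer key per task with identical options (user=zuul, state=present, manage_dir=True, exclusive=False). A plausible playbook-side shape, assuming a loop over a key list; the variable name is hypothetical, the module options are the ones recorded:

    # Sketch of the repeated authorized_key invocations logged above.
    # `developer_ssh_keys` is a hypothetical variable holding the public keys.
    - name: Authorize developer SSH keys for the zuul user
      ansible.posix.authorized_key:
        user: zuul
        state: present
        key: "{{ item }}"
        manage_dir: true
        exclusive: false
      loop: "{{ developer_ssh_keys }}"
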
Dec  7 04:02:03 np0005549474 python3[6079]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  7 04:02:03 np0005549474 systemd[1]: Starting Time & Date Service...
Dec  7 04:02:03 np0005549474 systemd[1]: Started Time & Date Service.
Dec  7 04:02:03 np0005549474 systemd-timedated[6081]: Changed time zone to 'UTC' (UTC).
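
The timezone change is a single module call handed off to systemd-timedated, as the three lines above show. The logged invocation (name=UTC, hwclock unset) corresponds to:

    # Task matching the logged community.general.timezone invocation.
    - name: Set the system timezone to UTC
      community.general.timezone:
        name: UTC
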
Dec  7 04:02:05 np0005549474 python3[6110]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:02:05 np0005549474 python3[6186]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:02:06 np0005549474 python3[6257]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765098125.369744-251-8927662937619/source _original_basename=tmpm1uhjhsk follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:02:06 np0005549474 python3[6357]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:02:06 np0005549474 python3[6428]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765098126.344956-301-95739427037741/source _original_basename=tmpb9r436h3 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:02:07 np0005549474 python3[6530]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:02:08 np0005549474 python3[6603]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765098127.740368-381-124247806449033/source _original_basename=tmpt5mjrgpl follow=False checksum=18e69b4e7a766afddcd5db28cd6f47889284b7a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:02:09 np0005549474 python3[6651]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:02:09 np0005549474 python3[6677]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:02:09 np0005549474 python3[6757]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:02:10 np0005549474 python3[6830]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765098129.5318515-451-228364195053009/source _original_basename=tmp0v28ytgd follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:02:10 np0005549474 python3[6881]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-b378-1a30-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
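
The sudoers drop-in above is written with mode 288 (decimal for octal 0440) and then checked with a separate /usr/sbin/visudo -c run. A sketch of the equivalent done atomically with copy's validate option instead of a follow-up command; the src name is hypothetical:

    # Deploy a sudoers drop-in; validate refuses the write if visudo rejects it.
    # The logged job ran visudo -c as a separate step; this folds it in.
    - name: Install zuul sudoers drop-in
      ansible.builtin.copy:
        src: zuul-sudo-grep
        dest: /etc/sudoers.d/zuul-sudo-grep
        mode: "0440"
        validate: /usr/sbin/visudo -cf %s
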
Dec  7 04:02:11 np0005549474 python3[6909]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-b378-1a30-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  7 04:02:13 np0005549474 python3[6937]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:02:30 np0005549474 python3[6963]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:02:33 np0005549474 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  7 04:03:15 np0005549474 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  7 04:03:15 np0005549474 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  7 04:03:15 np0005549474 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  7 04:03:15 np0005549474 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  7 04:03:15 np0005549474 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  7 04:03:15 np0005549474 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  7 04:03:15 np0005549474 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  7 04:03:15 np0005549474 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  7 04:03:15 np0005549474 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  7 04:03:15 np0005549474 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  7 04:03:15 np0005549474 NetworkManager[860]: <info>  [1765098195.8795] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  7 04:03:15 np0005549474 systemd-udevd[6966]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 04:03:15 np0005549474 NetworkManager[860]: <info>  [1765098195.8996] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:03:15 np0005549474 NetworkManager[860]: <info>  [1765098195.9040] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  7 04:03:15 np0005549474 NetworkManager[860]: <info>  [1765098195.9047] device (eth1): carrier: link connected
Dec  7 04:03:15 np0005549474 NetworkManager[860]: <info>  [1765098195.9050] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  7 04:03:15 np0005549474 NetworkManager[860]: <info>  [1765098195.9061] policy: auto-activating connection 'Wired connection 1' (d935b84f-1e5c-351e-908e-836d88ed6060)
Dec  7 04:03:15 np0005549474 NetworkManager[860]: <info>  [1765098195.9069] device (eth1): Activation: starting connection 'Wired connection 1' (d935b84f-1e5c-351e-908e-836d88ed6060)
Dec  7 04:03:15 np0005549474 NetworkManager[860]: <info>  [1765098195.9071] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:03:15 np0005549474 NetworkManager[860]: <info>  [1765098195.9077] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:03:15 np0005549474 NetworkManager[860]: <info>  [1765098195.9085] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:03:15 np0005549474 NetworkManager[860]: <info>  [1765098195.9093] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  7 04:03:17 np0005549474 python3[6993]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-692e-34c8-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:03:27 np0005549474 python3[7073]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:03:27 np0005549474 python3[7146]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765098206.9210308-104-128072717293844/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=b3e2520b189e417abd374586df57eaeb1b6c9085 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:03:28 np0005549474 python3[7196]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
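
The two steps logged here are: render the ci-private-network keyfile into /etc/NetworkManager/system-connections (mode 0600, root-owned, from the bootstrap-ci-network-nm-connection.nmconnection.j2 template recorded in _original_basename) and restart NetworkManager so the new profile is loaded. As tasks, per the recorded parameters:

    # Reconstruction of the logged copy + systemd calls; the template src is
    # the job-side name recorded in the log.
    - name: Install ci-private-network connection profile
      ansible.builtin.template:
        src: bootstrap-ci-network-nm-connection.nmconnection.j2
        dest: /etc/NetworkManager/system-connections/ci-private-network.nmconnection
        owner: root
        group: root
        mode: "0600"

    - name: Restart NetworkManager to load the profile
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted
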
Dec  7 04:03:28 np0005549474 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  7 04:03:28 np0005549474 systemd[1]: Stopped Network Manager Wait Online.
Dec  7 04:03:28 np0005549474 systemd[1]: Stopping Network Manager Wait Online...
Dec  7 04:03:28 np0005549474 systemd[1]: Stopping Network Manager...
Dec  7 04:03:28 np0005549474 NetworkManager[860]: <info>  [1765098208.8370] caught SIGTERM, shutting down normally.
Dec  7 04:03:28 np0005549474 NetworkManager[860]: <info>  [1765098208.8378] dhcp4 (eth0): canceled DHCP transaction
Dec  7 04:03:28 np0005549474 NetworkManager[860]: <info>  [1765098208.8378] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 04:03:28 np0005549474 NetworkManager[860]: <info>  [1765098208.8378] dhcp4 (eth0): state changed no lease
Dec  7 04:03:28 np0005549474 NetworkManager[860]: <info>  [1765098208.8380] manager: NetworkManager state is now CONNECTING
Dec  7 04:03:28 np0005549474 NetworkManager[860]: <info>  [1765098208.8442] dhcp4 (eth1): canceled DHCP transaction
Dec  7 04:03:28 np0005549474 NetworkManager[860]: <info>  [1765098208.8442] dhcp4 (eth1): state changed no lease
Dec  7 04:03:28 np0005549474 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 04:03:28 np0005549474 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  7 04:03:29 np0005549474 NetworkManager[860]: <info>  [1765098209.1550] exiting (success)
Dec  7 04:03:29 np0005549474 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  7 04:03:29 np0005549474 systemd[1]: Stopped Network Manager.
Dec  7 04:03:29 np0005549474 systemd[1]: NetworkManager.service: Consumed 1.072s CPU time, 9.9M memory peak.
Dec  7 04:03:29 np0005549474 systemd[1]: Starting Network Manager...
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.2296] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4452dece-8eac-4524-b110-088a9e058714)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.2297] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.2388] manager[0x561aea98a070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  7 04:03:29 np0005549474 systemd[1]: Starting Hostname Service...
Dec  7 04:03:29 np0005549474 systemd[1]: Started Hostname Service.
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3239] hostname: hostname: using hostnamed
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3240] hostname: static hostname changed from (none) to "np0005549474.novalocal"
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3246] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3253] manager[0x561aea98a070]: rfkill: Wi-Fi hardware radio set enabled
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3254] manager[0x561aea98a070]: rfkill: WWAN hardware radio set enabled
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3298] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3298] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3299] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3300] manager: Networking is enabled by state file
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3304] settings: Loaded settings plugin: keyfile (internal)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3318] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3358] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3371] dhcp: init: Using DHCP client 'internal'
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3376] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3384] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3392] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3405] device (lo): Activation: starting connection 'lo' (95a1d56c-e265-4e9f-bb61-bafa31bf60dd)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3415] device (eth0): carrier: link connected
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3421] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3428] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3429] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3438] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3448] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3457] device (eth1): carrier: link connected
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3464] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3472] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (d935b84f-1e5c-351e-908e-836d88ed6060) (indicated)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3472] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3480] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3490] device (eth1): Activation: starting connection 'Wired connection 1' (d935b84f-1e5c-351e-908e-836d88ed6060)
Dec  7 04:03:29 np0005549474 systemd[1]: Started Network Manager.
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3499] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3506] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3509] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3511] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3514] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3518] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3521] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3525] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3529] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3538] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3541] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3561] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3564] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3588] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3594] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3604] device (lo): Activation: successful, device activated.
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3615] dhcp4 (eth0): state changed new lease, address=38.102.83.70
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.3626] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  7 04:03:29 np0005549474 systemd[1]: Starting Network Manager Wait Online...
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.5760] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.5792] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.5794] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.5797] manager: NetworkManager state is now CONNECTED_SITE
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.5800] device (eth0): Activation: successful, device activated.
Dec  7 04:03:29 np0005549474 NetworkManager[7214]: <info>  [1765098209.5804] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  7 04:03:29 np0005549474 python3[7262]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-692e-34c8-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:03:39 np0005549474 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 04:03:59 np0005549474 systemd[4320]: Starting Mark boot as successful...
Dec  7 04:03:59 np0005549474 systemd[4320]: Finished Mark boot as successful.
Dec  7 04:03:59 np0005549474 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.2854] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  7 04:04:14 np0005549474 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 04:04:14 np0005549474 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3264] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3268] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3277] device (eth1): Activation: successful, device activated.
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3283] manager: startup complete
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3285] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <warn>  [1765098254.3291] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3301] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  7 04:04:14 np0005549474 systemd[1]: Finished Network Manager Wait Online.
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3523] dhcp4 (eth1): canceled DHCP transaction
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3524] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3524] dhcp4 (eth1): state changed no lease
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3544] policy: auto-activating connection 'ci-private-network' (602efafc-97e2-5187-a2f5-d02f2fa9f512)
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3551] device (eth1): Activation: starting connection 'ci-private-network' (602efafc-97e2-5187-a2f5-d02f2fa9f512)
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3552] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3556] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3566] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3578] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3623] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3625] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:04:14 np0005549474 NetworkManager[7214]: <info>  [1765098254.3633] device (eth1): Activation: successful, device activated.
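
What happened on eth1: the auto-generated 'Wired connection 1' profile never obtained a DHCP lease, so once the 45-second transaction timed out NetworkManager failed the activation (reason 'ip-config-unavailable') and auto-activated the freshly installed 'ci-private-network' profile instead, which comes up immediately (no dhcp4 transaction is logged for it, so it is presumably statically addressed). An illustrative check that the failover landed on the intended profile, not part of the logged job:

    # nmcli -g prints only the requested field for the device.
    - name: Verify eth1 is bound to ci-private-network
      ansible.builtin.command: nmcli -g GENERAL.CONNECTION device show eth1
      register: eth1_conn
      changed_when: false
      failed_when: eth1_conn.stdout != 'ci-private-network'
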
Dec  7 04:04:24 np0005549474 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 04:04:29 np0005549474 systemd-logind[796]: Session 1 logged out. Waiting for processes to exit.
Dec  7 04:05:27 np0005549474 systemd-logind[796]: New session 3 of user zuul.
Dec  7 04:05:27 np0005549474 systemd[1]: Started Session 3 of User zuul.
Dec  7 04:05:27 np0005549474 python3[7395]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:05:28 np0005549474 python3[7468]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765098327.4971573-373-98061901602273/source _original_basename=tmpis43st_s follow=False checksum=57bca5a761f595fa34860f9325990c87e5f7eb2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:05:32 np0005549474 systemd[1]: session-3.scope: Deactivated successfully.
Dec  7 04:05:32 np0005549474 systemd-logind[796]: Session 3 logged out. Waiting for processes to exit.
Dec  7 04:05:32 np0005549474 systemd-logind[796]: Removed session 3.
Dec  7 04:06:59 np0005549474 systemd[4320]: Created slice User Background Tasks Slice.
Dec  7 04:06:59 np0005549474 systemd[4320]: Starting Cleanup of User's Temporary Files and Directories...
Dec  7 04:06:59 np0005549474 systemd[4320]: Finished Cleanup of User's Temporary Files and Directories.
Dec  7 04:12:49 np0005549474 systemd-logind[796]: New session 4 of user zuul.
Dec  7 04:12:49 np0005549474 systemd[1]: Started Session 4 of User zuul.
Dec  7 04:12:50 np0005549474 python3[7544]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-3a55-99f4-000000001cea-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:12:50 np0005549474 python3[7572]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:12:50 np0005549474 python3[7599]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:12:51 np0005549474 python3[7625]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:12:51 np0005549474 python3[7651]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:12:51 np0005549474 python3[7677]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:12:52 np0005549474 python3[7755]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:12:52 np0005549474 python3[7828]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765098772.0907822-518-84555322267254/source _original_basename=tmpp_27tfyc follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:12:53 np0005549474 python3[7878]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  7 04:12:53 np0005549474 systemd[1]: Reloading.
Dec  7 04:12:53 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:12:55 np0005549474 python3[7934]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  7 04:12:55 np0005549474 python3[7960]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:12:56 np0005549474 python3[7988]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:12:56 np0005549474 python3[8016]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:12:56 np0005549474 python3[8044]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:12:57 np0005549474 python3[8071]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-3a55-99f4-000000001cf1-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:12:57 np0005549474 python3[8101]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
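The commands above write identical cgroup-v2 I/O throttles into io.max of the four top-level slices, after wait_for confirmed the io.max file exists; #012 in _raw_params is journald's escaping of a newline, and the last shell task just echoes each file back for verification. The io.max syntax is "MAJOR:MINOR key=value ...": 252:0 is most likely the virtio root disk (confirm with lsblk), 18000 caps read/write IOPS, and 262144000 B/s = 250 * 1024 * 1024 = 250 MiB/s. Decoded:

    # riops/wiops: read/write IOPS limits; rbps/wbps: read/write bytes-per-second limits
    for slice in init.scope machine.slice system.slice user.slice; do
        echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > /sys/fs/cgroup/$slice/io.max
    done
    lsblk -o NAME,MAJ:MIN    # check that 252:0 is the intended device
    cat /sys/fs/cgroup/system.slice/io.max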
Dec  7 04:13:00 np0005549474 systemd-logind[796]: Session 4 logged out. Waiting for processes to exit.
Dec  7 04:13:00 np0005549474 systemd[1]: session-4.scope: Deactivated successfully.
Dec  7 04:13:00 np0005549474 systemd[1]: session-4.scope: Consumed 4.388s CPU time.
Dec  7 04:13:00 np0005549474 systemd-logind[796]: Removed session 4.
Dec  7 04:13:02 np0005549474 systemd-logind[796]: New session 5 of user zuul.
Dec  7 04:13:02 np0005549474 systemd[1]: Started Session 5 of User zuul.
Dec  7 04:13:02 np0005549474 python3[8134]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  7 04:13:17 np0005549474 kernel: SELinux:  Converting 386 SID table entries...
Dec  7 04:13:17 np0005549474 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 04:13:17 np0005549474 kernel: SELinux:  policy capability open_perms=1
Dec  7 04:13:17 np0005549474 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 04:13:17 np0005549474 kernel: SELinux:  policy capability always_check_network=0
Dec  7 04:13:17 np0005549474 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 04:13:17 np0005549474 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 04:13:17 np0005549474 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 04:13:27 np0005549474 kernel: SELinux:  Converting 386 SID table entries...
Dec  7 04:13:27 np0005549474 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 04:13:27 np0005549474 kernel: SELinux:  policy capability open_perms=1
Dec  7 04:13:27 np0005549474 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 04:13:27 np0005549474 kernel: SELinux:  policy capability always_check_network=0
Dec  7 04:13:27 np0005549474 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 04:13:27 np0005549474 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 04:13:27 np0005549474 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 04:13:37 np0005549474 kernel: SELinux:  Converting 386 SID table entries...
Dec  7 04:13:37 np0005549474 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 04:13:37 np0005549474 kernel: SELinux:  policy capability open_perms=1
Dec  7 04:13:37 np0005549474 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 04:13:37 np0005549474 kernel: SELinux:  policy capability always_check_network=0
Dec  7 04:13:37 np0005549474 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 04:13:37 np0005549474 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 04:13:37 np0005549474 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 04:13:38 np0005549474 setsebool[8203]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  7 04:13:38 np0005549474 setsebool[8203]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
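These two setsebool entries are persistent boolean changes; the -P flag rebuilds and reloads the policy, which is what produces the "SELinux: Converting ... SID table entries" bursts around them (the earlier bursts at 04:13:17-04:13:37 likely come from policy modules pulled in by the podman/buildah install). Equivalent invocation:

    setsebool -P virt_use_nfs 1 virt_sandbox_use_all_caps 1   # -P = persistent, reloads policy
    getsebool virt_use_nfs virt_sandbox_use_all_caps          # verify both report "on"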
Dec  7 04:13:49 np0005549474 kernel: SELinux:  Converting 389 SID table entries...
Dec  7 04:13:49 np0005549474 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 04:13:49 np0005549474 kernel: SELinux:  policy capability open_perms=1
Dec  7 04:13:49 np0005549474 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 04:13:49 np0005549474 kernel: SELinux:  policy capability always_check_network=0
Dec  7 04:13:49 np0005549474 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 04:13:49 np0005549474 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 04:13:49 np0005549474 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 04:14:06 np0005549474 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  7 04:14:07 np0005549474 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 04:14:07 np0005549474 systemd[1]: Starting man-db-cache-update.service...
Dec  7 04:14:07 np0005549474 systemd[1]: Reloading.
Dec  7 04:14:07 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:14:07 np0005549474 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 04:14:10 np0005549474 python3[10448]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-6949-57be-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:14:10 np0005549474 kernel: evm: overlay not supported
Dec  7 04:14:10 np0005549474 systemd[4320]: Starting D-Bus User Message Bus...
Dec  7 04:14:10 np0005549474 dbus-broker-launch[11393]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  7 04:14:10 np0005549474 dbus-broker-launch[11393]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  7 04:14:10 np0005549474 systemd[4320]: Started D-Bus User Message Bus.
Dec  7 04:14:10 np0005549474 dbus-broker-launch[11393]: Ready
Dec  7 04:14:10 np0005549474 systemd[4320]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  7 04:14:10 np0005549474 systemd[4320]: Created slice Slice /user.
Dec  7 04:14:10 np0005549474 systemd[4320]: podman-11223.scope: unit configures an IP firewall, but not running as root.
Dec  7 04:14:10 np0005549474 systemd[4320]: (This warning is only shown for the first unit using IP firewalling.)
Dec  7 04:14:10 np0005549474 systemd[4320]: Started podman-11223.scope.
Dec  7 04:14:11 np0005549474 systemd[4320]: Started podman-pause-1fe3f207.scope.
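The scopes above run under the per-user systemd instance (systemd[4320]), i.e. this is rootless podman: a podman-<pid>.scope for the invocation plus a long-lived pause scope that keeps the user namespace alive. The IP-firewall warning means the unit requests systemd's per-unit IP filtering, which may not take effect without root. A quick rootless check (hypothetical verification command, not taken from the log):

    podman info --format '{{.Host.Security.Rootless}}'   # prints "true" for a rootless engine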
Dec  7 04:14:11 np0005549474 systemd[1]: session-5.scope: Deactivated successfully.
Dec  7 04:14:11 np0005549474 systemd[1]: session-5.scope: Consumed 1min 2.662s CPU time.
Dec  7 04:14:11 np0005549474 systemd-logind[796]: Session 5 logged out. Waiting for processes to exit.
Dec  7 04:14:11 np0005549474 systemd-logind[796]: Removed session 5.
Dec  7 04:14:13 np0005549474 irqbalance[789]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  7 04:14:13 np0005549474 irqbalance[789]: IRQ 27 affinity is now unmanaged
Dec  7 04:14:30 np0005549474 systemd-logind[796]: New session 6 of user zuul.
Dec  7 04:14:30 np0005549474 systemd[1]: Started Session 6 of User zuul.
Dec  7 04:14:31 np0005549474 python3[20191]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPEBTBhBZP90LzstcRNaMEaJYA9StP5JdyPfNDHacfdtvJAhV3TPbWHNVN0Z+oo6KXJ9tO3/Fc2SBfhpFcx8Lls= zuul@np0005549473.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:14:31 np0005549474 python3[20351]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPEBTBhBZP90LzstcRNaMEaJYA9StP5JdyPfNDHacfdtvJAhV3TPbWHNVN0Z+oo6KXJ9tO3/Fc2SBfhpFcx8Lls= zuul@np0005549473.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:14:32 np0005549474 python3[20679]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005549474.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  7 04:14:32 np0005549474 python3[20886]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPEBTBhBZP90LzstcRNaMEaJYA9StP5JdyPfNDHacfdtvJAhV3TPbWHNVN0Z+oo6KXJ9tO3/Fc2SBfhpFcx8Lls= zuul@np0005549473.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 04:14:33 np0005549474 python3[21131]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:14:33 np0005549474 python3[21353]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765098873.0584817-150-167231254668509/source _original_basename=tmpev_57cjk follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
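This block provisions cloud-admin: create the account, authorize the same ECDSA key already pushed for zuul and root, and install /etc/sudoers.d/cloud-admin (mode 0640, body masked). A shell sketch; the NOPASSWD rule is an assumption, since the real sudoers content is logged as NOT_LOGGING_PARAMETER:

    useradd -m -s /bin/bash cloud-admin
    install -d -m 0700 -o cloud-admin -g cloud-admin ~cloud-admin/.ssh
    echo 'ecdsa-sha2-nistp256 AAAA... zuul@np0005549473.novalocal' \
        >> ~cloud-admin/.ssh/authorized_keys       # key abbreviated; the full key is in the log
    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cloud-admin   # assumed rule
    chmod 0640 /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin          # syntax-check before trusting the file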
Dec  7 04:14:34 np0005549474 python3[21655]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  7 04:14:34 np0005549474 systemd[1]: Starting Hostname Service...
Dec  7 04:14:34 np0005549474 systemd[1]: Started Hostname Service.
Dec  7 04:14:34 np0005549474 systemd-hostnamed[21766]: Changed pretty hostname to 'compute-0'
Dec  7 04:14:34 np0005549474 systemd-hostnamed[21766]: Hostname set to <compute-0> (static)
Dec  7 04:14:34 np0005549474 NetworkManager[7214]: <info>  [1765098874.9611] hostname: static hostname changed from "np0005549474.novalocal" to "compute-0"
Dec  7 04:14:34 np0005549474 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 04:14:34 np0005549474 systemd[1]: Started Network Manager Script Dispatcher Service.
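ansible.builtin.hostname with use=systemd goes through systemd-hostnamed, which is why the rename to compute-0 shows up as both a pretty and a static hostname change, with NetworkManager picking it up immediately. The CLI equivalent:

    hostnamectl set-hostname compute-0   # with no --static/--pretty flag, sets all hostname types
    hostnamectl status                   # shows "Static hostname: compute-0"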
Dec  7 04:14:35 np0005549474 systemd[1]: session-6.scope: Deactivated successfully.
Dec  7 04:14:35 np0005549474 systemd[1]: session-6.scope: Consumed 2.668s CPU time.
Dec  7 04:14:35 np0005549474 systemd-logind[796]: Session 6 logged out. Waiting for processes to exit.
Dec  7 04:14:35 np0005549474 systemd-logind[796]: Removed session 6.
Dec  7 04:14:45 np0005549474 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 04:15:02 np0005549474 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 04:15:02 np0005549474 systemd[1]: Finished man-db-cache-update.service.
Dec  7 04:15:02 np0005549474 systemd[1]: man-db-cache-update.service: Consumed 1min 6.040s CPU time.
Dec  7 04:15:02 np0005549474 systemd[1]: run-r30ceb14fe15e41abad8d63b3401f0302.service: Deactivated successfully.
Dec  7 04:15:05 np0005549474 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  7 04:15:59 np0005549474 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  7 04:15:59 np0005549474 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  7 04:15:59 np0005549474 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  7 04:15:59 np0005549474 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  7 04:18:10 np0005549474 systemd-logind[796]: New session 7 of user zuul.
Dec  7 04:18:10 np0005549474 systemd[1]: Started Session 7 of User zuul.
Dec  7 04:18:10 np0005549474 python3[30080]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:18:12 np0005549474 python3[30196]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:18:13 np0005549474 python3[30269]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4576225-33940-254666842522370/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:18:13 np0005549474 python3[30295]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:18:13 np0005549474 python3[30368]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4576225-33940-254666842522370/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:18:14 np0005549474 python3[30394]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:18:14 np0005549474 python3[30467]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4576225-33940-254666842522370/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:18:14 np0005549474 python3[30493]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:18:15 np0005549474 python3[30566]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4576225-33940-254666842522370/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:18:15 np0005549474 python3[30592]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:18:15 np0005549474 python3[30665]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4576225-33940-254666842522370/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:18:16 np0005549474 python3[30691]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:18:16 np0005549474 python3[30764]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4576225-33940-254666842522370/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:18:16 np0005549474 python3[30790]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:18:17 np0005549474 python3[30863]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4576225-33940-254666842522370/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
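Six .repo files plus a delorean.repo.md5 checksum companion land in /etc/yum.repos.d/ (bodies masked as usual). Their repo IDs show up later in the 04:31:33 dnf-makecache output. To confirm the deployment from the shell:

    ls /etc/yum.repos.d/delorean* /etc/yum.repos.d/repo-setup-*
    dnf repolist --enabled    # should list the delorean and repo-setup-centos-* repositories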
Dec  7 04:18:23 np0005549474 irqbalance[789]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  7 04:18:23 np0005549474 irqbalance[789]: IRQ 26 affinity is now unmanaged
Dec  7 04:18:28 np0005549474 python3[30922]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:23:28 np0005549474 systemd-logind[796]: Session 7 logged out. Waiting for processes to exit.
Dec  7 04:23:28 np0005549474 systemd[1]: session-7.scope: Deactivated successfully.
Dec  7 04:23:28 np0005549474 systemd[1]: session-7.scope: Consumed 5.420s CPU time.
Dec  7 04:23:28 np0005549474 systemd-logind[796]: Removed session 7.
Dec  7 04:30:05 np0005549474 systemd-logind[796]: New session 8 of user zuul.
Dec  7 04:30:05 np0005549474 systemd[1]: Started Session 8 of User zuul.
Dec  7 04:30:06 np0005549474 python3.9[31111]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:30:08 np0005549474 python3.9[31292]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
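The _raw_params here is an entire script with newlines escaped as #012 by journald. Decoded, it fetches the repo-setup tool, installs it into a throwaway venv, and points the node at the current-podified antelope repos:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main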
Dec  7 04:30:15 np0005549474 systemd-logind[796]: Session 8 logged out. Waiting for processes to exit.
Dec  7 04:30:15 np0005549474 systemd[1]: session-8.scope: Deactivated successfully.
Dec  7 04:30:15 np0005549474 systemd[1]: session-8.scope: Consumed 7.834s CPU time.
Dec  7 04:30:15 np0005549474 systemd-logind[796]: Removed session 8.
Dec  7 04:30:31 np0005549474 systemd-logind[796]: New session 9 of user zuul.
Dec  7 04:30:31 np0005549474 systemd[1]: Started Session 9 of User zuul.
Dec  7 04:30:31 np0005549474 python3.9[31504]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  7 04:30:33 np0005549474 python3.9[31678]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:30:34 np0005549474 python3.9[31830]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:30:35 np0005549474 python3.9[31983]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:30:36 np0005549474 python3.9[32135]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:30:36 np0005549474 python3.9[32287]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:30:37 np0005549474 python3.9[32410]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765099836.497093-177-89253256865342/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:30:38 np0005549474 python3.9[32562]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:30:39 np0005549474 python3.9[32718]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:30:40 np0005549474 python3.9[32870]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:30:41 np0005549474 python3.9[33020]: ansible-ansible.builtin.service_facts Invoked
Dec  7 04:30:47 np0005549474 python3.9[33273]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
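Note the target: lineinfile against /proc/cmdline with create=False. Since /proc/cmdline is read-only, this only works as an assertion that cloud-init=disabled is already on the kernel command line; the task reports ok if the line is present and would error out trying to write if it were not. The plain-shell equivalent:

    grep -qw 'cloud-init=disabled' /proc/cmdline && echo present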
Dec  7 04:30:48 np0005549474 python3.9[33423]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:30:49 np0005549474 python3.9[33577]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:30:50 np0005549474 python3.9[33735]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:30:51 np0005549474 python3.9[33819]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:31:32 np0005549474 systemd[1]: Reloading.
Dec  7 04:31:33 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:31:33 np0005549474 systemd[1]: Starting dnf makecache...
Dec  7 04:31:33 np0005549474 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  7 04:31:33 np0005549474 dnf[34031]: Failed determining last makecache time.
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-openstack-barbican-42b4c41831408a8e323 147 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 195 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-openstack-cinder-1c00d6490d88e436f26ef 193 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 systemd[1]: Reloading.
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-python-stevedore-c4acc5639fd2329372142 195 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-python-cloudkitty-tests-tempest-2c80f8 205 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 160 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 147 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-python-designate-tests-tempest-347fdbc 162 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-openstack-glance-1fd12c29b339f30fe823e 190 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 196 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-openstack-manila-3c01b7181572c95dac462 193 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-python-whitebox-neutron-tests-tempest- 187 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-openstack-octavia-ba397f07a7331190208c 163 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-openstack-watcher-c014f81a8647287f6dcc 193 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-ansible-config_template-5ccaa22121a7ff 183 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 189 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-openstack-swift-dc98a8463506ac520c469a 207 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-python-tempestconf-8515371b7cceebd4282 203 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 dnf[34031]: delorean-openstack-heat-ui-013accbfd179753bc3f0 163 kB/s | 3.0 kB     00:00
Dec  7 04:31:33 np0005549474 systemd[1]: Reloading.
Dec  7 04:31:33 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:31:33 np0005549474 dnf[34031]: CentOS Stream 9 - BaseOS                         88 kB/s | 7.3 kB     00:00
Dec  7 04:31:33 np0005549474 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  7 04:31:33 np0005549474 dnf[34031]: CentOS Stream 9 - AppStream                      91 kB/s | 7.4 kB     00:00
Dec  7 04:31:34 np0005549474 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec  7 04:31:34 np0005549474 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec  7 04:31:34 np0005549474 dnf[34031]: CentOS Stream 9 - CRB                            66 kB/s | 7.2 kB     00:00
Dec  7 04:31:34 np0005549474 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec  7 04:31:34 np0005549474 dnf[34031]: CentOS Stream 9 - Extras packages                70 kB/s | 8.3 kB     00:00
Dec  7 04:31:34 np0005549474 dnf[34031]: dlrn-antelope-testing                           158 kB/s | 3.0 kB     00:00
Dec  7 04:31:34 np0005549474 dnf[34031]: dlrn-antelope-build-deps                        179 kB/s | 3.0 kB     00:00
Dec  7 04:31:34 np0005549474 dnf[34031]: centos9-rabbitmq                                129 kB/s | 3.0 kB     00:00
Dec  7 04:31:34 np0005549474 dnf[34031]: centos9-storage                                 145 kB/s | 3.0 kB     00:00
Dec  7 04:31:34 np0005549474 dnf[34031]: centos9-opstools                                137 kB/s | 3.0 kB     00:00
Dec  7 04:31:34 np0005549474 dnf[34031]: NFV SIG OpenvSwitch                             146 kB/s | 3.0 kB     00:00
Dec  7 04:31:34 np0005549474 dnf[34031]: repo-setup-centos-appstream                     191 kB/s | 4.4 kB     00:00
Dec  7 04:31:34 np0005549474 dnf[34031]: repo-setup-centos-baseos                        162 kB/s | 3.9 kB     00:00
Dec  7 04:31:34 np0005549474 dnf[34031]: repo-setup-centos-highavailability              176 kB/s | 3.9 kB     00:00
Dec  7 04:31:34 np0005549474 dnf[34031]: repo-setup-centos-powertools                    191 kB/s | 4.3 kB     00:00
Dec  7 04:31:34 np0005549474 dnf[34031]: Extra Packages for Enterprise Linux 9 - x86_64  245 kB/s |  32 kB     00:00
Dec  7 04:31:35 np0005549474 dnf[34031]: Metadata cache created.
Dec  7 04:31:35 np0005549474 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  7 04:31:35 np0005549474 systemd[1]: Finished dnf makecache.
Dec  7 04:31:35 np0005549474 systemd[1]: dnf-makecache.service: Consumed 1.608s CPU time.
Dec  7 04:32:36 np0005549474 kernel: SELinux:  Converting 2717 SID table entries...
Dec  7 04:32:36 np0005549474 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 04:32:36 np0005549474 kernel: SELinux:  policy capability open_perms=1
Dec  7 04:32:36 np0005549474 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 04:32:36 np0005549474 kernel: SELinux:  policy capability always_check_network=0
Dec  7 04:32:36 np0005549474 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 04:32:36 np0005549474 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 04:32:36 np0005549474 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 04:32:36 np0005549474 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  7 04:32:36 np0005549474 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 04:32:36 np0005549474 systemd[1]: Starting man-db-cache-update.service...
Dec  7 04:32:36 np0005549474 systemd[1]: Reloading.
Dec  7 04:32:37 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:32:37 np0005549474 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 04:32:38 np0005549474 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 04:32:38 np0005549474 systemd[1]: Finished man-db-cache-update.service.
Dec  7 04:32:38 np0005549474 systemd[1]: man-db-cache-update.service: Consumed 1.426s CPU time.
Dec  7 04:32:38 np0005549474 systemd[1]: run-r74d66a72142048aca51f17a7f6d76c9e.service: Deactivated successfully.
Dec  7 04:32:39 np0005549474 python3.9[35398]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:32:42 np0005549474 python3.9[35679]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  7 04:32:43 np0005549474 python3.9[35831]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  7 04:32:46 np0005549474 python3.9[35984]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:32:47 np0005549474 python3.9[36139]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
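These three tasks create a 1 GiB swap file (the creates=/swap guard makes dd a no-op on reruns), lock its permissions to 0600, and record it in fstab; the matching mkswap/swapon happen at 04:33:40 below, where the kernel reports 1048572k, i.e. 1048576 KiB minus one 4 KiB page reserved for the swap header. End to end:

    dd if=/dev/zero of=/swap bs=1M count=1024    # 1 GiB
    chmod 0600 /swap
    echo '/swap none swap sw 0 0' >> /etc/fstab  # what ansible.posix.mount state=present records
    mkswap /swap
    swapon /swap
    swapon --show    # verify ~1024M of swap on /swap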
Dec  7 04:32:50 np0005549474 python3.9[36293]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:33:00 np0005549474 python3.9[36445]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:33:00 np0005549474 python3.9[36568]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765099974.2888887-666-154408147231169/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
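The CA bundle copied here only takes effect once /usr/bin/update-ca-trust runs, which this play does at 04:33:42 below. The standard anchor flow:

    install -m 0644 tls-ca-bundle.pem /etc/pki/ca-trust/source/anchors/
    update-ca-trust              # regenerates /etc/pki/ca-trust/extracted/*
    trust list | head            # spot-check; the bundle contents are not logged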
Dec  7 04:33:02 np0005549474 python3.9[36720]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:33:02 np0005549474 python3.9[36872]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:33:03 np0005549474 python3.9[37025]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
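This pair seeds the LVM devices file: vgimportdevices --all writes entries for all visible VGs into /etc/lvm/devices/system.devices, and the touch guarantees the file exists even on a host with no VGs, so LVM restricts itself to listed devices instead of scanning everything. By hand:

    /usr/sbin/vgimportdevices --all
    test -f /etc/lvm/devices/system.devices || \
        install -m 0600 /dev/null /etc/lvm/devices/system.devices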
Dec  7 04:33:05 np0005549474 python3.9[37177]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  7 04:33:06 np0005549474 python3.9[37330]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  7 04:33:06 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 04:33:07 np0005549474 python3.9[37489]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  7 04:33:07 np0005549474 python3.9[37649]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  7 04:33:08 np0005549474 python3.9[37802]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  7 04:33:09 np0005549474 python3.9[37960]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  7 04:33:10 np0005549474 python3.9[38112]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:33:13 np0005549474 python3.9[38265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:33:13 np0005549474 python3.9[38417]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:33:14 np0005549474 python3.9[38540]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765099993.4171722-1023-171671024867695/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:33:15 np0005549474 python3.9[38692]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:33:16 np0005549474 systemd[1]: Starting Load Kernel Modules...
Dec  7 04:33:16 np0005549474 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  7 04:33:16 np0005549474 kernel: Bridge firewalling registered
Dec  7 04:33:16 np0005549474 systemd-modules-load[38696]: Inserted module 'br_netfilter'
Dec  7 04:33:16 np0005549474 systemd[1]: Finished Load Kernel Modules.
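Restarting systemd-modules-load.service applies the new 99-edpm.conf; the bridge/br_netfilter kernel messages confirm that file lists at least br_netfilter (its full body is masked). Minimal reproduction, with any modules beyond br_netfilter left as an assumption:

    echo br_netfilter >> /etc/modules-load.d/99-edpm.conf   # only module confirmed by the log
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter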
Dec  7 04:33:17 np0005549474 python3.9[38851]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:33:18 np0005549474 python3.9[38974]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765099997.1376214-1092-89854100876177/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
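Same drop-in pattern for kernel tunables: a masked 99-edpm.conf under /etc/sysctl.d, applied later at 04:33:45 by restarting systemd-sysctl.service. The manual equivalent:

    sysctl --system    # re-applies /etc/sysctl.d/*.conf and friends in precedence order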
Dec  7 04:33:19 np0005549474 python3.9[39127]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:33:23 np0005549474 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec  7 04:33:23 np0005549474 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec  7 04:33:24 np0005549474 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 04:33:24 np0005549474 systemd[1]: Starting man-db-cache-update.service...
Dec  7 04:33:24 np0005549474 systemd[1]: Reloading.
Dec  7 04:33:24 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:33:24 np0005549474 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 04:33:27 np0005549474 python3.9[40346]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:33:27 np0005549474 python3.9[41225]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  7 04:33:28 np0005549474 python3.9[42043]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:33:30 np0005549474 python3.9[42978]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:33:30 np0005549474 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  7 04:33:30 np0005549474 systemd[1]: Starting Authorization Manager...
Dec  7 04:33:30 np0005549474 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  7 04:33:30 np0005549474 polkitd[43535]: Started polkitd version 0.117
Dec  7 04:33:30 np0005549474 systemd[1]: Started Authorization Manager.
Dec  7 04:33:30 np0005549474 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 04:33:30 np0005549474 systemd[1]: Finished man-db-cache-update.service.
Dec  7 04:33:30 np0005549474 systemd[1]: man-db-cache-update.service: Consumed 5.477s CPU time.
Dec  7 04:33:30 np0005549474 systemd[1]: run-r687d0eda43824a599d8d7fb57dbe13a2.service: Deactivated successfully.
Dec  7 04:33:31 np0005549474 python3.9[43706]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:33:32 np0005549474 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  7 04:33:32 np0005549474 systemd[1]: tuned.service: Deactivated successfully.
Dec  7 04:33:32 np0005549474 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  7 04:33:32 np0005549474 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  7 04:33:33 np0005549474 systemd[1]: Started Dynamic System Tuning Daemon.
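The tuned sequence above: read the active profile, switch to throughput-performance, then enable and restart tuned (hence the stop/start pair at 04:33:32). As commands:

    dnf install -y tuned tuned-profiles-cpu-partitioning
    tuned-adm profile throughput-performance
    systemctl enable --now tuned
    tuned-adm active                  # "Current active profile: throughput-performance"
    cat /etc/tuned/active_profile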
Dec  7 04:33:33 np0005549474 python3.9[43867]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  7 04:33:37 np0005549474 python3.9[44019]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:33:37 np0005549474 systemd[1]: Reloading.
Dec  7 04:33:37 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:33:38 np0005549474 python3.9[44208]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:33:38 np0005549474 systemd[1]: Reloading.
Dec  7 04:33:38 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:33:40 np0005549474 python3.9[44397]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:33:41 np0005549474 python3.9[44550]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:33:41 np0005549474 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  7 04:33:42 np0005549474 python3.9[44703]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:33:44 np0005549474 python3.9[44865]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
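Writing 2 to /sys/kernel/mm/ksm/run stops KSM and unmerges all currently merged pages (0 merely stops, 1 runs), so together with disabling ksm/ksmtuned above this fully switches off kernel samepage merging:

    echo 2 > /sys/kernel/mm/ksm/run
    cat /sys/kernel/mm/ksm/pages_shared   # 0 once everything is unmerged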
Dec  7 04:33:45 np0005549474 python3.9[45018]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:33:45 np0005549474 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  7 04:33:45 np0005549474 systemd[1]: Stopped Apply Kernel Variables.
Dec  7 04:33:45 np0005549474 systemd[1]: Stopping Apply Kernel Variables...
Dec  7 04:33:45 np0005549474 systemd[1]: Starting Apply Kernel Variables...
Dec  7 04:33:45 np0005549474 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  7 04:33:45 np0005549474 systemd[1]: Finished Apply Kernel Variables.
Dec  7 04:33:46 np0005549474 systemd[1]: session-9.scope: Deactivated successfully.
Dec  7 04:33:46 np0005549474 systemd[1]: session-9.scope: Consumed 2min 12.134s CPU time.
Dec  7 04:33:46 np0005549474 systemd-logind[796]: Session 9 logged out. Waiting for processes to exit.
Dec  7 04:33:46 np0005549474 systemd-logind[796]: Removed session 9.
Dec  7 04:33:51 np0005549474 systemd-logind[796]: New session 10 of user zuul.
Dec  7 04:33:51 np0005549474 systemd[1]: Started Session 10 of User zuul.
Dec  7 04:33:52 np0005549474 python3.9[45202]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:33:53 np0005549474 python3.9[45358]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  7 04:33:54 np0005549474 python3.9[45511]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  7 04:33:55 np0005549474 python3.9[45669]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  7 04:33:56 np0005549474 python3.9[45829]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:33:57 np0005549474 python3.9[45913]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  7 04:34:00 np0005549474 python3.9[46077]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:34:12 np0005549474 kernel: SELinux:  Converting 2729 SID table entries...
Dec  7 04:34:12 np0005549474 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 04:34:12 np0005549474 kernel: SELinux:  policy capability open_perms=1
Dec  7 04:34:12 np0005549474 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 04:34:12 np0005549474 kernel: SELinux:  policy capability always_check_network=0
Dec  7 04:34:12 np0005549474 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 04:34:12 np0005549474 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 04:34:12 np0005549474 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 04:34:13 np0005549474 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  7 04:34:13 np0005549474 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  7 04:34:14 np0005549474 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 04:34:15 np0005549474 systemd[1]: Starting man-db-cache-update.service...
Dec  7 04:34:15 np0005549474 systemd[1]: Reloading.
Dec  7 04:34:15 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:34:15 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:34:15 np0005549474 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 04:34:15 np0005549474 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 04:34:15 np0005549474 systemd[1]: Finished man-db-cache-update.service.
Dec  7 04:34:15 np0005549474 systemd[1]: run-r839e0e6a289447de965bbba77b1e7fe4.service: Deactivated successfully.
Dec  7 04:34:16 np0005549474 python3.9[47177]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  7 04:34:16 np0005549474 systemd[1]: Reloading.
Dec  7 04:34:17 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:34:17 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:34:17 np0005549474 systemd[1]: Starting Open vSwitch Database Unit...
Dec  7 04:34:17 np0005549474 chown[47219]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  7 04:34:17 np0005549474 ovs-ctl[47224]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  7 04:34:17 np0005549474 ovs-ctl[47224]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  7 04:34:17 np0005549474 ovs-ctl[47224]: Starting ovsdb-server [  OK  ]
Dec  7 04:34:17 np0005549474 ovs-vsctl[47273]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  7 04:34:17 np0005549474 ovs-vsctl[47293]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"8da81261-a5d6-4df8-aa54-d9c0c8f72a67\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  7 04:34:17 np0005549474 ovs-ctl[47224]: Configuring Open vSwitch system IDs [  OK  ]
Dec  7 04:34:17 np0005549474 ovs-ctl[47224]: Enabling remote OVSDB managers [  OK  ]
Dec  7 04:34:17 np0005549474 ovs-vsctl[47299]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  7 04:34:17 np0005549474 systemd[1]: Started Open vSwitch Database Unit.
Dec  7 04:34:17 np0005549474 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  7 04:34:17 np0005549474 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  7 04:34:17 np0005549474 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  7 04:34:17 np0005549474 kernel: openvswitch: Open vSwitch switching datapath
Dec  7 04:34:17 np0005549474 ovs-ctl[47344]: Inserting openvswitch module [  OK  ]
Dec  7 04:34:17 np0005549474 ovs-ctl[47313]: Starting ovs-vswitchd [  OK  ]
Dec  7 04:34:17 np0005549474 ovs-vsctl[47361]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  7 04:34:17 np0005549474 ovs-ctl[47313]: Enabling remote OVSDB managers [  OK  ]
Dec  7 04:34:17 np0005549474 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  7 04:34:17 np0005549474 systemd[1]: Starting Open vSwitch...
Dec  7 04:34:17 np0005549474 systemd[1]: Finished Open vSwitch.
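[annotation] The ovs-ctl lines above record a first-boot initialization of OVSDB: create an empty /etc/openvswitch/conf.db, start ovsdb-server, then seed the Open_vSwitch table with version strings and system IDs before ovs-vswitchd comes up. A minimal Python sketch replaying the same ovs-vsctl calls (command arguments and values are taken verbatim from the log; assumes ovs-vsctl is on PATH):

import subprocess

def vsctl(*args):
    # --no-wait: don't block on ovs-vswitchd, which is not running yet
    subprocess.run(["ovs-vsctl", "--no-wait", *args], check=True)

# Seed the database the way ovs-ctl does on first start (values from the log)
vsctl("--", "init", "--", "set", "Open_vSwitch", ".", "db-version=8.5.1")
vsctl("set", "Open_vSwitch", ".",
      "ovs-version=3.3.5-115.el9s",
      'external-ids:system-id="8da81261-a5d6-4df8-aa54-d9c0c8f72a67"',
      'external-ids:rundir="/var/run/openvswitch"',
      'system-type="centos"', 'system-version="9"')
vsctl("add", "Open_vSwitch", ".", "external-ids", "hostname=compute-0")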
Dec  7 04:34:19 np0005549474 python3.9[47513]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:34:21 np0005549474 python3.9[47665]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  7 04:34:22 np0005549474 kernel: SELinux:  Converting 2743 SID table entries...
Dec  7 04:34:22 np0005549474 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 04:34:22 np0005549474 kernel: SELinux:  policy capability open_perms=1
Dec  7 04:34:22 np0005549474 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 04:34:22 np0005549474 kernel: SELinux:  policy capability always_check_network=0
Dec  7 04:34:22 np0005549474 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 04:34:22 np0005549474 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 04:34:22 np0005549474 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
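[annotation] The sefcontext task above registers a persistent SELinux file-context rule for /var/lib/edpm-config, and the reload that follows is what produces the kernel's SID-table conversion and policy-capability lines. Outside Ansible, a roughly equivalent sequence uses semanage and restorecon; a sketch (assumes policycoreutils-python-utils is installed):

import subprocess

# Persist the rule the Ansible task describes: label /var/lib/edpm-config
# (and everything below it) as container_file_t at level s0.
subprocess.run(["semanage", "fcontext", "-a",
                "-t", "container_file_t", "-r", "s0",
                r"/var/lib/edpm-config(/.*)?"], check=True)
# Apply the new context to anything that already exists under the path.
subprocess.run(["restorecon", "-Rv", "/var/lib/edpm-config"], check=True)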
Dec  7 04:34:23 np0005549474 python3.9[47820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:34:24 np0005549474 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  7 04:34:24 np0005549474 python3.9[47978]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:34:28 np0005549474 python3.9[48133]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
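[annotation] The two tasks above form an install-then-verify pair: dnf ensures the package set is present, and rpm -V checks the installed files against the RPM database (any output, and a nonzero exit, means a file deviates from its package manifest). A rough equivalent of the pattern, using the same package list:

import subprocess

packages = ["driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager",
            "openstack-selinux", "python3-libselinux", "python3-pyyaml", "rsync",
            "tmpwatch", "sysstat", "iproute-tc", "ksmtuned", "systemd-container",
            "crypto-policies-scripts", "grubby", "sos"]

subprocess.run(["dnf", "install", "-y", *packages], check=True)

# rpm -V prints one line per file that differs from the package manifest;
# silence (and exit status 0) means a clean verify.
verify = subprocess.run(["rpm", "-V", *packages], capture_output=True, text=True)
if verify.stdout:
    print("modified files detected:\n" + verify.stdout)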
Dec  7 04:34:29 np0005549474 python3.9[48420]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  7 04:34:30 np0005549474 python3.9[48570]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:34:31 np0005549474 python3.9[48724]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:34:34 np0005549474 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 04:34:34 np0005549474 systemd[1]: Starting man-db-cache-update.service...
Dec  7 04:34:34 np0005549474 systemd[1]: Reloading.
Dec  7 04:34:34 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:34:34 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:34:34 np0005549474 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 04:34:35 np0005549474 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 04:34:35 np0005549474 systemd[1]: Finished man-db-cache-update.service.
Dec  7 04:34:35 np0005549474 systemd[1]: run-rba6fe9cb1dab4185b4269d0583bb34e3.service: Deactivated successfully.
Dec  7 04:34:36 np0005549474 python3.9[49042]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:34:36 np0005549474 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  7 04:34:36 np0005549474 systemd[1]: Stopped Network Manager Wait Online.
Dec  7 04:34:36 np0005549474 systemd[1]: Stopping Network Manager Wait Online...
Dec  7 04:34:36 np0005549474 systemd[1]: Stopping Network Manager...
Dec  7 04:34:36 np0005549474 NetworkManager[7214]: <info>  [1765100076.1715] caught SIGTERM, shutting down normally.
Dec  7 04:34:36 np0005549474 NetworkManager[7214]: <info>  [1765100076.1740] dhcp4 (eth0): canceled DHCP transaction
Dec  7 04:34:36 np0005549474 NetworkManager[7214]: <info>  [1765100076.1740] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 04:34:36 np0005549474 NetworkManager[7214]: <info>  [1765100076.1740] dhcp4 (eth0): state changed no lease
Dec  7 04:34:36 np0005549474 NetworkManager[7214]: <info>  [1765100076.1750] manager: NetworkManager state is now CONNECTED_SITE
Dec  7 04:34:36 np0005549474 NetworkManager[7214]: <info>  [1765100076.1845] exiting (success)
Dec  7 04:34:36 np0005549474 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 04:34:36 np0005549474 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  7 04:34:36 np0005549474 systemd[1]: Stopped Network Manager.
Dec  7 04:34:36 np0005549474 systemd[1]: NetworkManager.service: Consumed 12.831s CPU time, 4.1M memory peak, read 0B from disk, written 29.5K to disk.
Dec  7 04:34:36 np0005549474 systemd[1]: Starting Network Manager...
Dec  7 04:34:36 np0005549474 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.2444] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4452dece-8eac-4524-b110-088a9e058714)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.2444] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.2512] manager[0x55ea8ec19090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  7 04:34:36 np0005549474 systemd[1]: Starting Hostname Service...
Dec  7 04:34:36 np0005549474 systemd[1]: Started Hostname Service.
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3548] hostname: hostname: using hostnamed
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3549] hostname: static hostname changed from (none) to "compute-0"
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3553] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3557] manager[0x55ea8ec19090]: rfkill: Wi-Fi hardware radio set enabled
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3558] manager[0x55ea8ec19090]: rfkill: WWAN hardware radio set enabled
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3575] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3583] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3584] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3584] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3584] manager: Networking is enabled by state file
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3586] settings: Loaded settings plugin: keyfile (internal)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3589] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3608] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
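[annotation] NetworkManager itself flags the ifcfg-rh settings plugin as deprecated and names the migration path in the warning above. The suggested one-shot conversion, wrapped for completeness:

import subprocess

# Convert ifcfg-rh profiles under /etc/sysconfig/network-scripts to
# keyfile format, as the deprecation warning recommends.
subprocess.run(["nmcli", "connection", "migrate"], check=True)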
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3614] dhcp: init: Using DHCP client 'internal'
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3616] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3620] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3624] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3629] device (lo): Activation: starting connection 'lo' (95a1d56c-e265-4e9f-bb61-bafa31bf60dd)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3634] device (eth0): carrier: link connected
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3637] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3640] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3641] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3644] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3649] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3653] device (eth1): carrier: link connected
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3656] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3659] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (602efafc-97e2-5187-a2f5-d02f2fa9f512) (indicated)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3659] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3663] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3667] device (eth1): Activation: starting connection 'ci-private-network' (602efafc-97e2-5187-a2f5-d02f2fa9f512)
Dec  7 04:34:36 np0005549474 systemd[1]: Started Network Manager.
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3675] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3685] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3686] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3687] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3695] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3696] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3698] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3699] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3713] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3719] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3721] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3730] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3743] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3755] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3756] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3763] device (lo): Activation: successful, device activated.
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3773] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3774] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3777] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3780] device (eth1): Activation: successful, device activated.
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3788] dhcp4 (eth0): state changed new lease, address=38.102.83.70
Dec  7 04:34:36 np0005549474 systemd[1]: Starting Network Manager Wait Online...
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3795] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3868] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3891] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3893] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3896] manager: NetworkManager state is now CONNECTED_SITE
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3900] device (eth0): Activation: successful, device activated.
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3905] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  7 04:34:36 np0005549474 NetworkManager[49051]: <info>  [1765100076.3907] manager: startup complete
Dec  7 04:34:36 np0005549474 systemd[1]: Finished Network Manager Wait Online.
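[annotation] The restart sequence runs straight through to CONNECTED_GLOBAL and "startup complete", at which point NetworkManager-wait-online finishes; that unit is what gates later network-dependent tasks. A minimal sketch of the same restart-and-wait pattern (nm-online -s waits for NM startup to complete, which is what the wait-online unit does; the 60-second timeout is an arbitrary choice here):

import subprocess

subprocess.run(["systemctl", "restart", "NetworkManager"], check=True)
# Block until NetworkManager reports startup complete, up to 60 seconds.
subprocess.run(["nm-online", "-s", "-q", "--timeout", "60"], check=True)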
Dec  7 04:34:37 np0005549474 python3.9[49268]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:34:42 np0005549474 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 04:34:42 np0005549474 systemd[1]: Starting man-db-cache-update.service...
Dec  7 04:34:42 np0005549474 systemd[1]: Reloading.
Dec  7 04:34:42 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:34:42 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:34:43 np0005549474 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 04:34:43 np0005549474 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 04:34:43 np0005549474 systemd[1]: Finished man-db-cache-update.service.
Dec  7 04:34:43 np0005549474 systemd[1]: run-r30ccef00d8c245998203a3b2f9dcabf2.service: Deactivated successfully.
Dec  7 04:34:45 np0005549474 python3.9[49727]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:34:46 np0005549474 python3.9[49879]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:34:46 np0005549474 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 04:34:47 np0005549474 python3.9[50033]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:34:47 np0005549474 python3.9[50185]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:34:48 np0005549474 python3.9[50337]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:34:49 np0005549474 python3.9[50489]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
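[annotation] The five ini_file tasks above converge /etc/NetworkManager/NetworkManager.conf and the cloud-init drop-in on a known state: no-auto-default=* present under [main], and any dns= or rc-manager= overrides removed. A configparser sketch of the same edits (paths, section, and keys from the log; the Ansible tasks additionally write without spaces around "="):

import configparser

def edit(path, set_opts=(), drop_opts=()):
    cfg = configparser.ConfigParser()
    cfg.read(path)                      # tolerates a missing file (create=True)
    if not cfg.has_section("main"):
        cfg.add_section("main")
    for key, value in set_opts:
        cfg.set("main", key, value)
    for key in drop_opts:
        cfg.remove_option("main", key)
    with open(path, "w") as fh:
        cfg.write(fh)

# Stop NM from auto-generating default profiles for new devices, and drop
# any dns=/rc-manager= overrides left behind by cloud-init.
edit("/etc/NetworkManager/NetworkManager.conf",
     set_opts=[("no-auto-default", "*")],
     drop_opts=["dns", "rc-manager"])
edit("/etc/NetworkManager/conf.d/99-cloud-init.conf",
     drop_opts=["dns", "rc-manager"])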
Dec  7 04:34:50 np0005549474 python3.9[50641]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:34:50 np0005549474 python3.9[50764]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100089.8383565-647-60159552211266/.source _original_basename=.bjqs00h3 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:34:51 np0005549474 python3.9[50916]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:34:53 np0005549474 python3.9[51068]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  7 04:34:53 np0005549474 python3.9[51220]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:34:56 np0005549474 python3.9[51647]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  7 04:34:57 np0005549474 ansible-async_wrapper.py[51822]: Invoked with j66511196254 300 /home/zuul/.ansible/tmp/ansible-tmp-1765100096.9027739-845-255208774550928/AnsiballZ_edpm_os_net_config.py _
Dec  7 04:34:57 np0005549474 ansible-async_wrapper.py[51825]: Starting module and watcher
Dec  7 04:34:57 np0005549474 ansible-async_wrapper.py[51825]: Start watching 51826 (300)
Dec  7 04:34:57 np0005549474 ansible-async_wrapper.py[51826]: Start module (51826)
Dec  7 04:34:57 np0005549474 ansible-async_wrapper.py[51822]: Return async_wrapper task started.
Dec  7 04:34:58 np0005549474 python3.9[51827]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
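[annotation] edpm_os_net_config runs asynchronously against the rendered /etc/os-net-config/config.yaml with use_nmstate=True. The file's contents are not logged, but the NetworkManager activity that follows (br-ex carrying eth1 plus vlan20-23) implies a layout roughly like the hypothetical sketch below; this is an illustrative guess at the shape, not the actual file:

import yaml  # PyYAML, installed by the earlier package task (python3-pyyaml)

# Hypothetical os-net-config layout matching the devices seen in the log.
config = {"network_config": [{
    "type": "ovs_bridge",
    "name": "br-ex",
    "use_dhcp": False,
    "members": [
        {"type": "interface", "name": "eth1", "primary": True},
        *[{"type": "vlan", "vlan_id": vid} for vid in (20, 21, 22, 23)],
    ],
}]}
print(yaml.safe_dump(config, sort_keys=False))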
Dec  7 04:34:58 np0005549474 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  7 04:34:58 np0005549474 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  7 04:34:58 np0005549474 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  7 04:34:58 np0005549474 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  7 04:34:58 np0005549474 kernel: cfg80211: failed to load regulatory.db
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1053] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1067] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1618] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1619] audit: op="connection-add" uuid="7efb9ec6-3789-482c-b2c3-a90a52cf539b" name="br-ex-br" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1634] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1635] audit: op="connection-add" uuid="1eab5144-44c7-4da7-8d36-d45752f2209a" name="br-ex-port" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1647] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1648] audit: op="connection-add" uuid="89d4364c-98f3-45b7-b320-6b8f1774d55f" name="eth1-port" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1658] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1659] audit: op="connection-add" uuid="57b2ccf1-98c9-4a45-9007-89fce97cc724" name="vlan20-port" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1669] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1670] audit: op="connection-add" uuid="c1c42c21-c24d-4c3e-994e-2f12ce684a8b" name="vlan21-port" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1681] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1682] audit: op="connection-add" uuid="0aa980f9-6b0a-44cd-b915-d821ea2e838c" name="vlan22-port" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1692] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1693] audit: op="connection-add" uuid="4b4119e2-1398-421a-9bbf-e4c034867762" name="vlan23-port" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1710] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1726] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1727] audit: op="connection-add" uuid="2be7437d-5211-4ce1-abfc-ffca53e27abe" name="br-ex-if" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1768] audit: op="connection-update" uuid="602efafc-97e2-5187-a2f5-d02f2fa9f512" name="ci-private-network" args="ovs-external-ids.data,ipv4.never-default,ipv4.dns,ipv4.method,ipv4.routes,ipv4.addresses,ipv4.routing-rules,connection.slave-type,connection.controller,connection.timestamp,connection.master,connection.port-type,ovs-interface.type,ipv6.routing-rules,ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routes" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1783] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1785] audit: op="connection-add" uuid="df4e4e96-7d53-4245-b045-ba0897113227" name="vlan20-if" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1800] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1801] audit: op="connection-add" uuid="f3ea5e0a-83ef-4f37-98d0-924a6af76fae" name="vlan21-if" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1815] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1816] audit: op="connection-add" uuid="6e4d0234-caed-4439-ab64-af5a25add9e1" name="vlan22-if" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1831] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1832] audit: op="connection-add" uuid="6ee2a341-ceaf-4d47-81da-e9dc90aceee6" name="vlan23-if" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1843] audit: op="connection-delete" uuid="d935b84f-1e5c-351e-908e-836d88ed6060" name="Wired connection 1" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1855] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1864] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1868] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (7efb9ec6-3789-482c-b2c3-a90a52cf539b)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1868] audit: op="connection-activate" uuid="7efb9ec6-3789-482c-b2c3-a90a52cf539b" name="br-ex-br" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1870] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1875] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1878] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (1eab5144-44c7-4da7-8d36-d45752f2209a)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1879] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1883] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1886] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (89d4364c-98f3-45b7-b320-6b8f1774d55f)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1888] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1893] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1895] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (57b2ccf1-98c9-4a45-9007-89fce97cc724)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1896] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1901] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1903] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (c1c42c21-c24d-4c3e-994e-2f12ce684a8b)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1904] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1909] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1911] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (0aa980f9-6b0a-44cd-b915-d821ea2e838c)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1913] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1917] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1921] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4b4119e2-1398-421a-9bbf-e4c034867762)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1921] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1923] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1924] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1929] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1932] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1934] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (2be7437d-5211-4ce1-abfc-ffca53e27abe)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1935] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1937] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1938] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1939] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1939] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1947] device (eth1): disconnecting for new activation request.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1948] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1950] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1951] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1953] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1954] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1957] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1960] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (df4e4e96-7d53-4245-b045-ba0897113227)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1961] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1963] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1964] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1965] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1967] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1970] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1973] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (f3ea5e0a-83ef-4f37-98d0-924a6af76fae)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1974] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1976] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1977] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1978] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1980] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1983] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1986] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (6e4d0234-caed-4439-ab64-af5a25add9e1)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1986] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1988] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1989] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1990] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1992] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1995] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1998] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (6ee2a341-ceaf-4d47-81da-e9dc90aceee6)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.1999] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2000] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2002] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2003] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2004] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2014] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.addr-gen-mode" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2015] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2017] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2018] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2023] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2025] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2027] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2029] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2031] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2035] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2039] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2042] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2044] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 kernel: ovs-system: entered promiscuous mode
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2060] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2063] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2066] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2068] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2072] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 kernel: Timeout policy base is empty
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2086] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2093] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2096] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 04:35:00 np0005549474 systemd-udevd[51832]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2104] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2111] dhcp4 (eth0): canceled DHCP transaction
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2111] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2112] dhcp4 (eth0): state changed no lease
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2115] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2136] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2143] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51828 uid=0 result="fail" reason="Device is not activated"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2198] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2204] dhcp4 (eth0): state changed new lease, address=38.102.83.70
Dec  7 04:35:00 np0005549474 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2261] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2275] device (eth1): disconnecting for new activation request.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2276] audit: op="connection-activate" uuid="602efafc-97e2-5187-a2f5-d02f2fa9f512" name="ci-private-network" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2276] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2412] device (eth1): Activation: starting connection 'ci-private-network' (602efafc-97e2-5187-a2f5-d02f2fa9f512)
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2418] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2423] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2438] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2446] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2451] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2459] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2465] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2471] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2476] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2478] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2480] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51828 uid=0 result="success"
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2480] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 kernel: br-ex: entered promiscuous mode
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2483] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2485] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2490] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2500] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2507] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2512] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2519] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2525] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2530] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2538] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2545] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2550] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2556] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2562] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2568] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 kernel: vlan22: entered promiscuous mode
Dec  7 04:35:00 np0005549474 systemd-udevd[51834]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2595] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2601] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2645] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2646] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2648] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2654] device (eth1): Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 kernel: vlan20: entered promiscuous mode
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2686] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 kernel: vlan21: entered promiscuous mode
Dec  7 04:35:00 np0005549474 systemd-udevd[51833]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2734] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2744] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2768] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2770] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2777] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 kernel: vlan23: entered promiscuous mode
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2798] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2806] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2815] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2850] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2864] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2876] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2900] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2917] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2919] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2927] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2941] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2945] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2946] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2953] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.2973] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.3010] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.3011] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 04:35:00 np0005549474 NetworkManager[49051]: <info>  [1765100100.3018] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
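
The run above is NetworkManager walking br-ex, eth1 and the vlan20-23 Open vSwitch ports through the ip-check -> secondaries -> activated states. A quick cross-check of the resulting topology from a shell on the node might look like this (illustrative commands, not part of the captured run):

    ovs-vsctl show                               # expect bridge br-ex carrying eth1 and the vlan20-23 ports
    nmcli -f GENERAL.STATE device show br-ex     # expect "100 (connected)"
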
Dec  7 04:35:01 np0005549474 NetworkManager[49051]: <info>  [1765100101.4443] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51828 uid=0 result="success"
Dec  7 04:35:01 np0005549474 NetworkManager[49051]: <info>  [1765100101.6326] checkpoint[0x55ea8ebee950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  7 04:35:01 np0005549474 NetworkManager[49051]: <info>  [1765100101.6328] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51828 uid=0 result="success"
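
The checkpoint-create / checkpoint-adjust-rollback-timeout / checkpoint-destroy audit entries are NetworkManager's D-Bus checkpoint API at work: the caller (pid 51828, the os-net-config run) snapshots the network state before reconfiguring, keeps extending the rollback deadline while it works, and destroys the checkpoint once the new configuration is confirmed, so a botched change would auto-revert instead of cutting the node off. A minimal sketch of the same sequence with busctl (the timeout and flag values here are assumptions; the log does not show the caller's exact arguments):

    # snapshot all devices, 60 s rollback window, flags=1 (DESTROY_ALL)
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate 'aouu' 0 60 1
    # commit: drop the checkpoint so no rollback happens
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointDestroy 'o' \
        /org/freedesktop/NetworkManager/Checkpoint/1
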
Dec  7 04:35:01 np0005549474 python3.9[52187]: ansible-ansible.legacy.async_status Invoked with jid=j66511196254.51822 mode=status _async_dir=/root/.ansible_async
Dec  7 04:35:01 np0005549474 NetworkManager[49051]: <info>  [1765100101.9862] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51828 uid=0 result="success"
Dec  7 04:35:01 np0005549474 NetworkManager[49051]: <info>  [1765100101.9881] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51828 uid=0 result="success"
Dec  7 04:35:02 np0005549474 NetworkManager[49051]: <info>  [1765100102.1989] audit: op="networking-control" arg="global-dns-configuration" pid=51828 uid=0 result="success"
Dec  7 04:35:02 np0005549474 NetworkManager[49051]: <info>  [1765100102.2016] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  7 04:35:02 np0005549474 NetworkManager[49051]: <info>  [1765100102.2045] audit: op="networking-control" arg="global-dns-configuration" pid=51828 uid=0 result="success"
Dec  7 04:35:02 np0005549474 NetworkManager[49051]: <info>  [1765100102.2073] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51828 uid=0 result="success"
Dec  7 04:35:02 np0005549474 NetworkManager[49051]: <info>  [1765100102.3593] checkpoint[0x55ea8ebeea20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  7 04:35:02 np0005549474 NetworkManager[49051]: <info>  [1765100102.3603] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51828 uid=0 result="success"
Dec  7 04:35:02 np0005549474 ansible-async_wrapper.py[51826]: Module complete (51826)
Dec  7 04:35:02 np0005549474 ansible-async_wrapper.py[51825]: Done in kid B.
Dec  7 04:35:05 np0005549474 python3.9[52295]: ansible-ansible.legacy.async_status Invoked with jid=j66511196254.51822 mode=status _async_dir=/root/.ansible_async
Dec  7 04:35:06 np0005549474 python3.9[52394]: ansible-ansible.legacy.async_status Invoked with jid=j66511196254.51822 mode=cleanup _async_dir=/root/.ansible_async
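
The async_status calls (mode=status, then mode=cleanup) poll the job that ansible-async_wrapper.py finished above ("Done in kid B."). The wrapper leaves the result as a JSON file named after the job id under _async_dir, so the equivalent manual check would be (path and jid taken from the log entries; illustrative only):

    cat /root/.ansible_async/j66511196254.51822    # JSON status/result written by the async wrapper
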
Dec  7 04:35:06 np0005549474 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  7 04:35:06 np0005549474 python3.9[52548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:35:07 np0005549474 python3.9[52671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100106.4028711-926-138121670959368/.source.returncode _original_basename=.wp7ej7bv follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:35:08 np0005549474 python3.9[52823]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:35:08 np0005549474 python3.9[52947]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100107.9062552-974-132470545245785/.source.cfg _original_basename=.dpzaem8q follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
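
The 99-edpm-disable-network-config.cfg drop-in tells cloud-init to stop managing network configuration now that os-net-config owns it. The logged copy does not reveal the payload, but the stock cloud-init directive for this is a one-liner (assumed content, not read from the file):

    # /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg
    network:
      config: disabled
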
Dec  7 04:35:10 np0005549474 python3.9[53099]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:35:10 np0005549474 systemd[1]: Reloading Network Manager...
Dec  7 04:35:10 np0005549474 NetworkManager[49051]: <info>  [1765100110.0691] audit: op="reload" arg="0" pid=53103 uid=0 result="success"
Dec  7 04:35:10 np0005549474 NetworkManager[49051]: <info>  [1765100110.0696] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  7 04:35:10 np0005549474 systemd[1]: Reloaded Network Manager.
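
state=reloaded on the NetworkManager unit translates to a SIGHUP-driven configuration re-read rather than a restart (note the "SIGHUP,config-files" signal above), which is why existing connections stay up while the conf.d snippets are re-evaluated. Manual equivalents:

    systemctl reload NetworkManager
    nmcli general reload
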
Dec  7 04:35:10 np0005549474 systemd-logind[796]: Session 10 logged out. Waiting for processes to exit.
Dec  7 04:35:10 np0005549474 systemd[1]: session-10.scope: Deactivated successfully.
Dec  7 04:35:10 np0005549474 systemd[1]: session-10.scope: Consumed 48.875s CPU time.
Dec  7 04:35:10 np0005549474 systemd-logind[796]: Removed session 10.
Dec  7 04:35:16 np0005549474 systemd-logind[796]: New session 11 of user zuul.
Dec  7 04:35:16 np0005549474 systemd[1]: Started Session 11 of User zuul.
Dec  7 04:35:17 np0005549474 python3.9[53287]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:35:18 np0005549474 python3.9[53441]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:35:19 np0005549474 python3.9[53635]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:35:20 np0005549474 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 04:35:20 np0005549474 systemd[1]: session-11.scope: Deactivated successfully.
Dec  7 04:35:20 np0005549474 systemd[1]: session-11.scope: Consumed 2.270s CPU time.
Dec  7 04:35:20 np0005549474 systemd-logind[796]: Session 11 logged out. Waiting for processes to exit.
Dec  7 04:35:20 np0005549474 systemd-logind[796]: Removed session 11.
Dec  7 04:35:25 np0005549474 systemd-logind[796]: New session 12 of user zuul.
Dec  7 04:35:25 np0005549474 systemd[1]: Started Session 12 of User zuul.
Dec  7 04:35:26 np0005549474 python3.9[53818]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:35:27 np0005549474 python3.9[53973]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:35:29 np0005549474 python3.9[54129]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:35:29 np0005549474 python3.9[54213]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
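
This is the first of several ansible-ansible.legacy.dnf tasks in this section (openssh-server and chrony follow the same shape below). With every option left at its default, the invocation reduces to a plain package install; the shell equivalent:

    dnf -y install podman
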
Dec  7 04:35:32 np0005549474 python3.9[54367]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:35:33 np0005549474 python3.9[54562]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:35:34 np0005549474 python3.9[54714]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:35:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-compat3393750538-merged.mount: Deactivated successfully.
Dec  7 04:35:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1757474440-merged.mount: Deactivated successfully.
Dec  7 04:35:34 np0005549474 podman[54715]: 2025-12-07 09:35:34.396626883 +0000 UTC m=+0.050299333 system refresh
Dec  7 04:35:35 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:35:35 np0005549474 python3.9[54878]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:35:36 np0005549474 python3.9[55001]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100134.7735589-197-2972664702304/.source.json follow=False _original_basename=podman_network_config.j2 checksum=124ccdf5ff7ce4b39d3413e9dd44270e05b3b31b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
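
The copy above pins the default podman network definition into /etc/containers/networks/podman.json so netavark does not regenerate it with different parameters later. The deployed file content is not logged; a representative default-network definition looks roughly like this (illustrative values only, not the actual file):

    {
      "name": "podman",
      "driver": "bridge",
      "network_interface": "podman0",
      "subnets": [
        { "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" }
      ],
      "ipv6_enabled": false,
      "internal": false,
      "dns_enabled": false
    }
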
Dec  7 04:35:36 np0005549474 python3.9[55153]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:35:37 np0005549474 python3.9[55276]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765100136.5019307-242-250292948955413/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:35:38 np0005549474 python3.9[55428]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:35:39 np0005549474 python3.9[55580]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:35:39 np0005549474 python3.9[55732]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:35:40 np0005549474 python3.9[55884]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
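
Taken together, the four ini_file tasks leave /etc/containers/containers.conf with exactly these settings (reconstructed from the logged section/option/value parameters):

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"
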
Dec  7 04:35:41 np0005549474 python3.9[56036]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:35:43 np0005549474 python3.9[56189]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:35:44 np0005549474 python3.9[56343]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:35:45 np0005549474 python3.9[56495]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:35:46 np0005549474 python3.9[56647]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:35:47 np0005549474 python3.9[56800]: ansible-service_facts Invoked
Dec  7 04:35:47 np0005549474 network[56817]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  7 04:35:47 np0005549474 network[56818]: 'network-scripts' will be removed from distribution in near future.
Dec  7 04:35:47 np0005549474 network[56819]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  7 04:35:53 np0005549474 python3.9[57273]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:35:56 np0005549474 python3.9[57428]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  7 04:35:58 np0005549474 python3.9[57580]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:35:58 np0005549474 python3.9[57705]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100157.5215263-674-257450299646084/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:35:59 np0005549474 python3.9[57859]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:00 np0005549474 python3.9[57984]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100159.0889943-719-181133972859100/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:02 np0005549474 python3.9[58138]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:03 np0005549474 python3.9[58292]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:36:04 np0005549474 python3.9[58376]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:36:06 np0005549474 python3.9[58530]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:36:06 np0005549474 python3.9[58614]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:36:06 np0005549474 chronyd[805]: chronyd exiting
Dec  7 04:36:06 np0005549474 systemd[1]: Stopping NTP client/server...
Dec  7 04:36:06 np0005549474 systemd[1]: chronyd.service: Deactivated successfully.
Dec  7 04:36:06 np0005549474 systemd[1]: Stopped NTP client/server.
Dec  7 04:36:06 np0005549474 systemd[1]: Starting NTP client/server...
Dec  7 04:36:07 np0005549474 chronyd[58622]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  7 04:36:07 np0005549474 chronyd[58622]: Frequency -26.124 +/- 0.860 ppm read from /var/lib/chrony/drift
Dec  7 04:36:07 np0005549474 chronyd[58622]: Loaded seccomp filter (level 2)
Dec  7 04:36:07 np0005549474 systemd[1]: Started NTP client/server.
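
chronyd came back with the frequency it had saved in /var/lib/chrony/drift (-26.124 ppm), so it converges quickly after the restart; the PEERNTP=no line written earlier also keeps DHCP from injecting its own NTP servers. Post-restart sanity checks would be (illustrative):

    chronyc sources -v    # servers configured by the new /etc/chrony.conf
    chronyc tracking      # current offset and frequency
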
Dec  7 04:36:07 np0005549474 systemd[1]: session-12.scope: Deactivated successfully.
Dec  7 04:36:07 np0005549474 systemd[1]: session-12.scope: Consumed 25.783s CPU time.
Dec  7 04:36:07 np0005549474 systemd-logind[796]: Session 12 logged out. Waiting for processes to exit.
Dec  7 04:36:07 np0005549474 systemd-logind[796]: Removed session 12.
Dec  7 04:36:13 np0005549474 systemd-logind[796]: New session 13 of user zuul.
Dec  7 04:36:13 np0005549474 systemd[1]: Started Session 13 of User zuul.
Dec  7 04:36:14 np0005549474 python3.9[58803]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:15 np0005549474 python3.9[58955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:16 np0005549474 python3.9[59078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100175.0139356-62-54452221729665/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:16 np0005549474 systemd[1]: session-13.scope: Deactivated successfully.
Dec  7 04:36:16 np0005549474 systemd[1]: session-13.scope: Consumed 1.919s CPU time.
Dec  7 04:36:16 np0005549474 systemd-logind[796]: Session 13 logged out. Waiting for processes to exit.
Dec  7 04:36:16 np0005549474 systemd-logind[796]: Removed session 13.
Dec  7 04:36:22 np0005549474 systemd-logind[796]: New session 14 of user zuul.
Dec  7 04:36:22 np0005549474 systemd[1]: Started Session 14 of User zuul.
Dec  7 04:36:23 np0005549474 python3.9[59256]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:36:24 np0005549474 python3.9[59412]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:25 np0005549474 python3.9[59587]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:26 np0005549474 python3.9[59710]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765100184.968612-83-27547553117773/.source.json _original_basename=.l9mvyb__ follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:27 np0005549474 python3.9[59862]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:28 np0005549474 python3.9[59985]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100187.0969195-152-44978064507217/.source _original_basename=.2spisusx follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:29 np0005549474 python3.9[60137]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:36:29 np0005549474 python3.9[60289]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:30 np0005549474 python3.9[60412]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765100189.323374-224-253233136439119/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:36:31 np0005549474 python3.9[60564]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:31 np0005549474 python3.9[60687]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765100190.7617865-224-175722399022187/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:36:32 np0005549474 python3.9[60839]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:33 np0005549474 python3.9[60991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:33 np0005549474 python3.9[61114]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100192.8479116-335-21198263417929/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:34 np0005549474 python3.9[61266]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:35 np0005549474 python3.9[61389]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100194.3256755-380-11468370441995/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:36 np0005549474 python3.9[61541]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:36:36 np0005549474 systemd[1]: Reloading.
Dec  7 04:36:36 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:36:36 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:36:37 np0005549474 systemd[1]: Reloading.
Dec  7 04:36:37 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:36:37 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:36:37 np0005549474 systemd[1]: Starting EDPM Container Shutdown...
Dec  7 04:36:37 np0005549474 systemd[1]: Finished EDPM Container Shutdown.
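
The systemd task above (daemon_reload=True, enabled=True, state=started) accounts for the back-to-back "Reloading." passes and the immediate run of the oneshot unit; the 91-edpm-container-shutdown.preset written just before presumably carries the enable policy for the unit (assumption; the preset content is not logged). A shell equivalent of what the module performed:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service
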
Dec  7 04:36:38 np0005549474 python3.9[61770]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:38 np0005549474 python3.9[61893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100197.7325585-449-165400541832834/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:39 np0005549474 python3.9[62045]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:40 np0005549474 python3.9[62168]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100199.1923306-494-202661089490154/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:41 np0005549474 python3.9[62320]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:36:41 np0005549474 systemd[1]: Reloading.
Dec  7 04:36:41 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:36:41 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:36:41 np0005549474 systemd[1]: Reloading.
Dec  7 04:36:41 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:36:41 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:36:41 np0005549474 systemd[1]: Starting Create netns directory...
Dec  7 04:36:41 np0005549474 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  7 04:36:41 np0005549474 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  7 04:36:41 np0005549474 systemd[1]: Finished Create netns directory.
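
netns-placeholder is a oneshot that pre-creates a network namespace so /run/netns exists before any container needs it; the run-netns-placeholder.mount entry above is the bind mount such a namespace leaves behind. The by-hand equivalent would be roughly (assumed from the unit and mount names; the unit file content is not logged):

    ip netns add placeholder    # creates and bind-mounts /run/netns/placeholder
    ip netns list
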
Dec  7 04:36:42 np0005549474 python3.9[62545]: ansible-ansible.builtin.service_facts Invoked
Dec  7 04:36:42 np0005549474 network[62562]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  7 04:36:42 np0005549474 network[62563]: 'network-scripts' will be removed from distribution in near future.
Dec  7 04:36:42 np0005549474 network[62564]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  7 04:36:49 np0005549474 python3.9[62827]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:36:49 np0005549474 systemd[1]: Reloading.
Dec  7 04:36:49 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:36:49 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:36:49 np0005549474 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  7 04:36:50 np0005549474 iptables.init[62868]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  7 04:36:50 np0005549474 iptables.init[62868]: iptables: Flushing firewall rules: [  OK  ]
Dec  7 04:36:50 np0005549474 systemd[1]: iptables.service: Deactivated successfully.
Dec  7 04:36:50 np0005549474 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  7 04:36:51 np0005549474 python3.9[63064]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:36:52 np0005549474 python3.9[63218]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:36:52 np0005549474 systemd[1]: Reloading.
Dec  7 04:36:52 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:36:52 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:36:52 np0005549474 systemd[1]: Starting Netfilter Tables...
Dec  7 04:36:52 np0005549474 systemd[1]: Finished Netfilter Tables.
Dec  7 04:36:53 np0005549474 python3.9[63411]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
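
These tasks are the cutover from the legacy iptables services to nftables: stop and disable the old units (note the init script setting chains to ACCEPT and flushing as it stops), enable nftables.service, then clear whatever ruleset was loaded so the EDPM rule files below start from a blank slate. Shell equivalent:

    systemctl disable --now iptables.service ip6tables.service
    systemctl enable --now nftables.service
    nft flush ruleset
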
Dec  7 04:36:54 np0005549474 python3.9[63564]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:55 np0005549474 python3.9[63689]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100214.2442327-701-251197943019009/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:56 np0005549474 python3.9[63842]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:36:56 np0005549474 systemd[1]: Reloading OpenSSH server daemon...
Dec  7 04:36:56 np0005549474 systemd[1]: Reloaded OpenSSH server daemon.
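
The sshd_config copy ran with validate=/usr/sbin/sshd -T -f %s, so a syntactically broken config would have aborted the task before touching the live file; the reload then applies it without dropping established sessions. By hand:

    /usr/sbin/sshd -T -f /etc/ssh/sshd_config    # parse check; -T dumps the effective config
    systemctl reload sshd
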
Dec  7 04:36:57 np0005549474 python3.9[63998]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:57 np0005549474 python3.9[64150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:36:58 np0005549474 python3.9[64273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100217.3206718-794-7252879429197/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:36:59 np0005549474 python3.9[64425]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  7 04:36:59 np0005549474 systemd[1]: Starting Time & Date Service...
Dec  7 04:36:59 np0005549474 systemd[1]: Started Time & Date Service.
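
community.general.timezone on a systemd host goes through timedated, which is why systemd-timedated starts on demand here (and idles out again at 04:37:30 below). Shell equivalent:

    timedatectl set-timezone UTC
    timedatectl show --property=Timezone
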
Dec  7 04:37:00 np0005549474 python3.9[64581]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:01 np0005549474 python3.9[64733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:37:02 np0005549474 python3.9[64858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100221.1668987-899-270786235487571/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:03 np0005549474 python3.9[65010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:37:04 np0005549474 python3.9[65133]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100222.761615-944-66238360165794/.source.yaml _original_basename=.px61jheg follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:04 np0005549474 python3.9[65285]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:37:05 np0005549474 python3.9[65408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100224.266142-989-93814039441255/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:06 np0005549474 python3.9[65561]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:37:07 np0005549474 python3.9[65714]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:37:08 np0005549474 python3[65867]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  7 04:37:09 np0005549474 python3.9[66019]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:37:09 np0005549474 python3.9[66144]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100228.361531-1106-78810526200640/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:10 np0005549474 python3.9[66296]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:37:11 np0005549474 python3.9[66419]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100229.979481-1151-146681904581959/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:12 np0005549474 python3.9[66571]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:37:12 np0005549474 python3.9[66694]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100231.5027673-1196-138214026505910/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:13 np0005549474 python3.9[66846]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:37:14 np0005549474 python3.9[66969]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100232.9360278-1241-178326671859276/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:15 np0005549474 python3.9[67121]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:37:15 np0005549474 python3.9[67244]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100234.3733065-1286-75692799337269/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:16 np0005549474 python3.9[67396]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:17 np0005549474 python3.9[67548]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:37:18 np0005549474 python3.9[67707]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
        include "/etc/nftables/edpm-chains.nft"
        include "/etc/nftables/edpm-rules.nft"
        include "/etc/nftables/edpm-jumps.nft"
        path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
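
With that managed block in place, /etc/sysconfig/nftables.conf carries the persistent include chain, and blockinfile's validate=nft -c -f %s guarantees the file parses before it is written. The resulting block, reconstructed from the logged block and marker parameters:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

    # manual re-check of the persistent config:
    nft -c -f /etc/sysconfig/nftables.conf
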
Dec  7 04:37:19 np0005549474 python3.9[67860]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:20 np0005549474 python3.9[68012]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:21 np0005549474 python3.9[68164]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  7 04:37:21 np0005549474 python3.9[68317]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
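
ansible.posix.mount with state=mounted and boot=True both mounts the filesystem and persists it, so these two tasks leave fstab entries equivalent to the following (reconstructed from the logged src/fstype/opts/dump/passno values):

    none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
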
Dec  7 04:37:22 np0005549474 systemd[1]: session-14.scope: Deactivated successfully.
Dec  7 04:37:22 np0005549474 systemd[1]: session-14.scope: Consumed 39.536s CPU time.
Dec  7 04:37:22 np0005549474 systemd-logind[796]: Session 14 logged out. Waiting for processes to exit.
Dec  7 04:37:22 np0005549474 systemd-logind[796]: Removed session 14.
Dec  7 04:37:28 np0005549474 systemd-logind[796]: New session 15 of user zuul.
Dec  7 04:37:28 np0005549474 systemd[1]: Started Session 15 of User zuul.
Dec  7 04:37:29 np0005549474 python3.9[68498]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  7 04:37:29 np0005549474 python3.9[68650]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:37:30 np0005549474 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  7 04:37:31 np0005549474 python3.9[68804]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:37:32 np0005549474 python3.9[68956]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDztIgdvWfbGcTBsnJ/M+7HPF8fmQq/y+Bl35+zFajL3KlZAwT5Jrd0wBJFCENJp3TXe2vCz5X1q7WE7KkTCmfFoRuHmoqlZhTqT9s/+r8kiDatZiqCOWaKW4t/5FdXKBIVPlkry4+jUtXum7Hjaqx3CWAN9zTBaMGorSAA8LKMMvZPP0EYbAxaLgivTJ1mbZF0/ZNGo/5WQc2vAa9bAToTb0YwrajhjGwm8gpS1t7deqebzgprT7jWeXpxQZEVS/ynyQFICZ5W6covXVgsWgQNtfbmweGFQOMlP0vZE1/P3GUjWJgmaVsDrNDWdjCgiaRAZnNCC01eZyUjas+eot7B1Sg0BLS3JeORj3tIRcVI9DkuMQCdex5q/BCiz8YueUZn4qIiyvmG1max5Xui0X1LygXyNdyBWs5DbBGfPsFBLyXT1noEfYsgk5v0iu8DLl+PShKLO8xLqJMeYVYsUY8uG6qv+lA0YbVeiMomYLVXMABowwzcwzKHnlj5f+keT0=
        compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICAU0KXuEPsaXKf0jGICVhewmjwEgAqPrkc4waZyQc7o
        compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBUF894VPJUzj6uHFODSSpNciOlDtn3PuhA44yhVzfkk/lOehkynDHVgBX6zwUYnOmiLJE7vHinKqWzoAVHhOas=
        compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCip7MnZvuJx8DmLIGnIc8NcND4H8xH1hog1PQWG+WFQHEqpA3BpOSGhk8Mr5skxXappecIladNg73ReINM2gE58XsvsHhQICeXuRBK091YtVSafixD3fEvhD+xGUIukp3F6EPKU0x4WQ0xWQC38o13OyZtGRApI6AQEAxg0QMsB7qwwroH6ag7l7U4sv5nYqK3upInbblwL0LYfo6jyhHnhwZBVjv2MTJ8zZktF54SlM68fh8WQwQbA7VMqK6wEJlDRkdsIXPbq2PN6V08KJlBkBlvgXu5aTIeGQ5DdFuKQutnMEWlwiCtoJNly6Pv7PwjZnDKkPQP5RamELk/eKCRHXY5SbfmyG9VtAHHEV2f9NsjnFZRBx9ikx/H6/NpPmlMji5VbyfY1b0u0DreNZqm2bDWRcL++rsjZDfWqh2cJOF4Jan0m12bfjWDBXeGiunpl4XWydA0nbi0v4RHvH6pD2BoTuxC2rVSR233WC88Xe5HU1WoXegIy43ksMeFvGs=
        compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYukxfCIA1Xurqi7GbVHfVTkzw++ujxQPgfwUA9AznN
        compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOT58aEV4d46XVVznwJYUJL8kuqtWeT85ng6XRArVPbONJirV0BPyfS1SwB7SxPwywavSEowgTdPM8QvrYiA0kE=
        compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVimUYmVq1jwN4I5i4nI9XPpovC84bLnjioQY6MxnDdHWaEfuEub8qpNrfkTCFppybs82dXQEl9witk6tAj8GQQGfFN/IfI+GFHby5G2bWpOumixFRFVkhc3QW9inlnJNA0TMzwlbz5LOkL9/ShhCpshMnBGNjKJFaH5GvlqpWCYYAotq1zbwd6SRIu4O5cPa3+7mFmXKtlFl28oAFp3NMsNJ9wbIWhXeOcfUSNbrL52O30C6TKW8HiBC2kfg578bm0Pa6r2iMvPHhW7kMm5eQwUfB5l5JKgIsDJmaKjLej/4U7hO52yut7hfnV3O8qK0ZpD2xEwhe9OneH4tKueT63SehDENUIJWAasPiPrlHWkfm6PWhKwPMBu3Vuir/4R1SA6ZIJEzQeGq/nUuSBtbDZC4jDuXb8oywpR/uCaBgZbziPhqBMIegQDMvKeQGQmZn6V+eKkfv3I9Z83LbQRXEnIWiuf4XRp1btGZYv0+Q7zgiD+dw9QxCgWkdWxA9SoM=
        compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEWDyTOT2SMCqj8YwhAvKshXrBfGOObG4cDM9r5B2FZj
        compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJuP9cUBko1m6+714/2inXnWXQqIN7Sx7/A0GBQAjM8bAkICVNXZtk9Pu38lY43gxHx3nZ57o3Dpp2ak8tsjrR4=
        create=True mode=0644 path=/tmp/ansible.8sniprvi state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:37:33 np0005549474 python3.9[69108]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.8sniprvi' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:37:33 np0005549474 python3.9[69262]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.8sniprvi state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
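The three tasks above implement a common known_hosts distribution pattern: host keys for compute-0/1/2 are written between ANSIBLE MANAGED BLOCK markers in a temp file (the #012 sequences in the blockinfile output are syslog's escape for embedded newlines), the temp file is copied over the system-wide /etc/ssh/ssh_known_hosts, and the temp file is deleted. A minimal shell sketch of the same sequence, with ssh-keyscan standing in for the fact-gathering that actually produced the keys (paths and markers are from the log; the keyscan step is an assumption):

    # Assemble the managed block in a temp file
    {
      echo '# BEGIN ANSIBLE MANAGED BLOCK'
      ssh-keyscan -t rsa,ed25519,ecdsa compute-0 compute-1 compute-2 2>/dev/null
      echo '# END ANSIBLE MANAGED BLOCK'
    } > /tmp/ansible.8sniprvi
    chmod 0644 /tmp/ansible.8sniprvi

    # Install as the system-wide known_hosts, then remove the temp file
    cat /tmp/ansible.8sniprvi > /etc/ssh/ssh_known_hosts
    rm -f /tmp/ansible.8sniprvi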
Dec  7 04:37:34 np0005549474 systemd[1]: session-15.scope: Deactivated successfully.
Dec  7 04:37:34 np0005549474 systemd[1]: session-15.scope: Consumed 3.710s CPU time.
Dec  7 04:37:34 np0005549474 systemd-logind[796]: Session 15 logged out. Waiting for processes to exit.
Dec  7 04:37:34 np0005549474 systemd-logind[796]: Removed session 15.
Dec  7 04:37:39 np0005549474 systemd-logind[796]: New session 16 of user zuul.
Dec  7 04:37:39 np0005549474 systemd[1]: Started Session 16 of User zuul.
Dec  7 04:37:40 np0005549474 python3.9[69442]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:37:41 np0005549474 python3.9[69599]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  7 04:37:42 np0005549474 python3.9[69753]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
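The two ansible.builtin.systemd tasks above map one-to-one onto systemctl operations; a shell equivalent of what was just invoked:

    systemctl enable sshd   # enabled=True
    systemctl start sshd    # state=started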
Dec  7 04:37:43 np0005549474 python3.9[69906]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:37:44 np0005549474 python3.9[70059]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:37:45 np0005549474 python3.9[70213]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:37:46 np0005549474 python3.9[70368]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
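PIDs 69906-70368 apply the EDPM firewall in two phases: the chain definitions are loaded first, and because the /etc/nftables/edpm-rules.nft.changed sentinel exists, the flush/rules/jump-update files are concatenated and fed to nft as one ruleset, which nft applies as a single transaction; the sentinel is then removed. The same sequence as plain shell, taken from the recorded commands (the if around the second phase reflects the stat task's evident purpose and is an assumption):

    nft -f /etc/nftables/edpm-chains.nft

    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi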
Dec  7 04:37:46 np0005549474 systemd[1]: session-16.scope: Deactivated successfully.
Dec  7 04:37:46 np0005549474 systemd[1]: session-16.scope: Consumed 5.026s CPU time.
Dec  7 04:37:46 np0005549474 systemd-logind[796]: Session 16 logged out. Waiting for processes to exit.
Dec  7 04:37:46 np0005549474 systemd-logind[796]: Removed session 16.
Dec  7 04:37:52 np0005549474 systemd-logind[796]: New session 17 of user zuul.
Dec  7 04:37:52 np0005549474 systemd[1]: Started Session 17 of User zuul.
Dec  7 04:37:53 np0005549474 python3.9[70547]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:37:54 np0005549474 python3.9[70703]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:37:55 np0005549474 python3.9[70787]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  7 04:37:57 np0005549474 python3.9[70938]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:37:59 np0005549474 python3.9[71089]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
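needs-restarting -r (from yum-utils, installed by the preceding dnf task) exits 1 when core packages such as the kernel or glibc have been updated since boot and 0 otherwise, while the find task counts marker files under /var/lib/openstack/reboot_required/. A sketch of how the two reboot checks could combine (the paths are from the log; the combining logic is an assumption):

    needs-restarting -r; rc=$?
    markers=$(find /var/lib/openstack/reboot_required/ -type f | wc -l)
    if [ "$rc" -eq 1 ] || [ "$markers" -gt 0 ]; then
        echo "reboot required"
    fi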
Dec  7 04:38:00 np0005549474 python3.9[71239]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:38:00 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 04:38:00 np0005549474 python3.9[71390]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:38:01 np0005549474 systemd[1]: session-17.scope: Deactivated successfully.
Dec  7 04:38:01 np0005549474 systemd[1]: session-17.scope: Consumed 6.387s CPU time.
Dec  7 04:38:01 np0005549474 systemd-logind[796]: Session 17 logged out. Waiting for processes to exit.
Dec  7 04:38:01 np0005549474 systemd-logind[796]: Removed session 17.
Dec  7 04:38:10 np0005549474 systemd-logind[796]: New session 18 of user zuul.
Dec  7 04:38:10 np0005549474 systemd[1]: Started Session 18 of User zuul.
Dec  7 04:38:15 np0005549474 chronyd[58622]: Selected source 162.159.200.1 (pool.ntp.org)
Dec  7 04:38:16 np0005549474 python3[72159]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:38:18 np0005549474 python3[72254]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  7 04:38:19 np0005549474 python3[72281]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 04:38:19 np0005549474 python3[72307]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:38:19 np0005549474 kernel: loop: module loaded
Dec  7 04:38:20 np0005549474 kernel: loop3: detected capacity change from 0 to 41943040
Dec  7 04:38:20 np0005549474 python3[72341]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:38:20 np0005549474 lvm[72344]: PV /dev/loop3 not used.
Dec  7 04:38:20 np0005549474 lvm[72353]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:38:20 np0005549474 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  7 04:38:20 np0005549474 lvm[72355]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec  7 04:38:20 np0005549474 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
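The two shell tasks above build a loop-backed LVM volume for a test OSD: dd with bs=1 count=0 seek=20G allocates a 20 GiB sparse file without writing any data, losetup binds it to /dev/loop3 (the kernel's "capacity change from 0 to 41943040" is the same size in 512-byte sectors: 41943040 × 512 bytes = 20 GiB), and pvcreate/vgcreate/lvcreate layer ceph_vg0/ceph_lv0 on top. The recorded commands, with the #012 syslog escapes decoded back to newlines:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk

    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs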
Dec  7 04:38:21 np0005549474 python3[72433]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:38:21 np0005549474 python3[72506]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765100300.9282563-36824-168016982005714/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:38:22 np0005549474 python3[72556]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:38:22 np0005549474 systemd[1]: Reloading.
Dec  7 04:38:22 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:38:22 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:38:22 np0005549474 systemd[1]: Starting Ceph OSD losetup...
Dec  7 04:38:22 np0005549474 bash[72595]: /dev/loop3: [64513]:4327950 (/var/lib/ceph-osd-0.img)
Dec  7 04:38:22 np0005549474 systemd[1]: Finished Ceph OSD losetup.
Dec  7 04:38:22 np0005549474 lvm[72596]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:38:22 np0005549474 lvm[72596]: VG ceph_vg0 finished
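Loop devices do not persist across reboots, so the play installs ceph-osd-losetup-0.service (rendered from ceph-osd-losetup.service.j2) to re-attach the backing file at boot; the bash output above is losetup reporting the existing binding. The unit's contents are not in the log, so the following is only a plausible sketch of such a unit, written as the shell that would install it:

    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Re-attach (or report) the loop device backing the OSD image
    ExecStart=/usr/bin/bash -c 'losetup /dev/loop3 /var/lib/ceph-osd-0.img || losetup -j /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service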
Dec  7 04:38:24 np0005549474 python3[72621]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:38:27 np0005549474 python3[72714]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  7 04:38:29 np0005549474 python3[72773]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
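Installing cephadm here is a two-step dnf sequence: centos-release-ceph-squid drops the CentOS Storage SIG repository definition for the Ceph "squid" (19.x) release, and only then is the cephadm package resolvable. As shell:

    dnf install -y centos-release-ceph-squid   # enables the Storage SIG squid repo
    dnf install -y cephadm                     # now resolvable from that repo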
Dec  7 04:38:32 np0005549474 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 04:38:32 np0005549474 systemd[1]: Starting man-db-cache-update.service...
Dec  7 04:38:33 np0005549474 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 04:38:33 np0005549474 systemd[1]: Finished man-db-cache-update.service.
Dec  7 04:38:33 np0005549474 systemd[1]: run-r8b53917e92254eab8018e60c03d449b8.service: Deactivated successfully.
Dec  7 04:38:33 np0005549474 python3[72890]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 04:38:33 np0005549474 python3[72918]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:38:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:38:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:38:34 np0005549474 python3[72983]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:38:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:38:35 np0005549474 python3[73009]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:38:35 np0005549474 python3[73087]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:38:36 np0005549474 python3[73160]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765100315.4877775-37016-92082935644508/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:38:37 np0005549474 python3[73262]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:38:37 np0005549474 python3[73335]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765100316.786088-37034-166573675893471/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:38:37 np0005549474 python3[73385]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 04:38:38 np0005549474 python3[73413]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 04:38:38 np0005549474 python3[73441]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 04:38:38 np0005549474 python3[73469]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
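For readability, the bootstrap invocation above with the trailing #012 escape decoded and the stray backslash before --skip-monitoring-stack (a line-continuation remnant from the playbook template) removed:

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100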
Dec  7 04:38:39 np0005549474 systemd-logind[796]: New session 19 of user ceph-admin.
Dec  7 04:38:39 np0005549474 systemd[1]: Created slice User Slice of UID 42477.
Dec  7 04:38:39 np0005549474 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  7 04:38:39 np0005549474 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  7 04:38:39 np0005549474 systemd[1]: Starting User Manager for UID 42477...
Dec  7 04:38:39 np0005549474 systemd[73477]: Queued start job for default target Main User Target.
Dec  7 04:38:39 np0005549474 systemd[73477]: Created slice User Application Slice.
Dec  7 04:38:39 np0005549474 systemd[73477]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  7 04:38:39 np0005549474 systemd[73477]: Started Daily Cleanup of User's Temporary Directories.
Dec  7 04:38:39 np0005549474 systemd[73477]: Reached target Paths.
Dec  7 04:38:39 np0005549474 systemd[73477]: Reached target Timers.
Dec  7 04:38:39 np0005549474 systemd[73477]: Starting D-Bus User Message Bus Socket...
Dec  7 04:38:39 np0005549474 systemd[73477]: Starting Create User's Volatile Files and Directories...
Dec  7 04:38:39 np0005549474 systemd[73477]: Listening on D-Bus User Message Bus Socket.
Dec  7 04:38:39 np0005549474 systemd[73477]: Reached target Sockets.
Dec  7 04:38:39 np0005549474 systemd[73477]: Finished Create User's Volatile Files and Directories.
Dec  7 04:38:39 np0005549474 systemd[73477]: Reached target Basic System.
Dec  7 04:38:39 np0005549474 systemd[73477]: Reached target Main User Target.
Dec  7 04:38:39 np0005549474 systemd[73477]: Startup finished in 150ms.
Dec  7 04:38:39 np0005549474 systemd[1]: Started User Manager for UID 42477.
Dec  7 04:38:39 np0005549474 systemd[1]: Started Session 19 of User ceph-admin.
Dec  7 04:38:39 np0005549474 systemd[1]: session-19.scope: Deactivated successfully.
Dec  7 04:38:39 np0005549474 systemd-logind[796]: Session 19 logged out. Waiting for processes to exit.
Dec  7 04:38:39 np0005549474 systemd-logind[796]: Removed session 19.
Dec  7 04:38:39 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:38:39 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:38:42 np0005549474 systemd[1]: var-lib-containers-storage-overlay-compat2225870221-lower\x2dmapped.mount: Deactivated successfully.
Dec  7 04:38:49 np0005549474 systemd[1]: Stopping User Manager for UID 42477...
Dec  7 04:38:49 np0005549474 systemd[73477]: Activating special unit Exit the Session...
Dec  7 04:38:49 np0005549474 systemd[73477]: Stopped target Main User Target.
Dec  7 04:38:49 np0005549474 systemd[73477]: Stopped target Basic System.
Dec  7 04:38:49 np0005549474 systemd[73477]: Stopped target Paths.
Dec  7 04:38:49 np0005549474 systemd[73477]: Stopped target Sockets.
Dec  7 04:38:49 np0005549474 systemd[73477]: Stopped target Timers.
Dec  7 04:38:49 np0005549474 systemd[73477]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  7 04:38:49 np0005549474 systemd[73477]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  7 04:38:49 np0005549474 systemd[73477]: Closed D-Bus User Message Bus Socket.
Dec  7 04:38:49 np0005549474 systemd[73477]: Stopped Create User's Volatile Files and Directories.
Dec  7 04:38:49 np0005549474 systemd[73477]: Removed slice User Application Slice.
Dec  7 04:38:49 np0005549474 systemd[73477]: Reached target Shutdown.
Dec  7 04:38:49 np0005549474 systemd[73477]: Finished Exit the Session.
Dec  7 04:38:49 np0005549474 systemd[73477]: Reached target Exit the Session.
Dec  7 04:38:49 np0005549474 systemd[1]: user@42477.service: Deactivated successfully.
Dec  7 04:38:49 np0005549474 systemd[1]: Stopped User Manager for UID 42477.
Dec  7 04:38:49 np0005549474 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  7 04:38:49 np0005549474 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  7 04:38:49 np0005549474 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  7 04:38:49 np0005549474 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  7 04:38:49 np0005549474 systemd[1]: Removed slice User Slice of UID 42477.
Dec  7 04:38:59 np0005549474 podman[73570]: 2025-12-07 09:38:59.035177592 +0000 UTC m=+19.148455678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:38:59 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:38:59 np0005549474 podman[73629]: 2025-12-07 09:38:59.126532491 +0000 UTC m=+0.070877388 container create adf28bc3bdc6489566223cfadb384e4385cdbead7fd1845865e89e681133055e (image=quay.io/ceph/ceph:v19, name=festive_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:38:59 np0005549474 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  7 04:38:59 np0005549474 systemd[1]: Started libpod-conmon-adf28bc3bdc6489566223cfadb384e4385cdbead7fd1845865e89e681133055e.scope.
Dec  7 04:38:59 np0005549474 podman[73629]: 2025-12-07 09:38:59.097469292 +0000 UTC m=+0.041814289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:38:59 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:38:59 np0005549474 podman[73629]: 2025-12-07 09:38:59.259500072 +0000 UTC m=+0.203844979 container init adf28bc3bdc6489566223cfadb384e4385cdbead7fd1845865e89e681133055e (image=quay.io/ceph/ceph:v19, name=festive_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Dec  7 04:38:59 np0005549474 podman[73629]: 2025-12-07 09:38:59.268240314 +0000 UTC m=+0.212585221 container start adf28bc3bdc6489566223cfadb384e4385cdbead7fd1845865e89e681133055e (image=quay.io/ceph/ceph:v19, name=festive_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:38:59 np0005549474 podman[73629]: 2025-12-07 09:38:59.272331562 +0000 UTC m=+0.216676499 container attach adf28bc3bdc6489566223cfadb384e4385cdbead7fd1845865e89e681133055e (image=quay.io/ceph/ceph:v19, name=festive_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:38:59 np0005549474 festive_pike[73645]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec  7 04:38:59 np0005549474 systemd[1]: libpod-adf28bc3bdc6489566223cfadb384e4385cdbead7fd1845865e89e681133055e.scope: Deactivated successfully.
Dec  7 04:38:59 np0005549474 podman[73650]: 2025-12-07 09:38:59.409929866 +0000 UTC m=+0.027101119 container died adf28bc3bdc6489566223cfadb384e4385cdbead7fd1845865e89e681133055e (image=quay.io/ceph/ceph:v19, name=festive_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:38:59 np0005549474 systemd[1]: var-lib-containers-storage-overlay-c5b0a7602bc373e8500b69ff21f0ca09d6c5efde087d276fddf22859f4507094-merged.mount: Deactivated successfully.
Dec  7 04:38:59 np0005549474 podman[73650]: 2025-12-07 09:38:59.555424919 +0000 UTC m=+0.172596142 container remove adf28bc3bdc6489566223cfadb384e4385cdbead7fd1845865e89e681133055e (image=quay.io/ceph/ceph:v19, name=festive_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Dec  7 04:38:59 np0005549474 systemd[1]: libpod-conmon-adf28bc3bdc6489566223cfadb384e4385cdbead7fd1845865e89e681133055e.scope: Deactivated successfully.
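From here to the end of the section, cephadm bootstrap does its work through short-lived podman containers. This first one (festive_pike) validates the pulled image by asking it for its version; roughly equivalent to:

    podman run --rm --entrypoint ceph quay.io/ceph/ceph:v19 --version
    # ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)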
Dec  7 04:38:59 np0005549474 podman[73665]: 2025-12-07 09:38:59.624529748 +0000 UTC m=+0.026733378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:38:59 np0005549474 podman[73665]: 2025-12-07 09:38:59.944615374 +0000 UTC m=+0.346818994 container create a0e098574e6a8fb1112b031d265dba005410575f7e58fd669a6ba7b6d282a8aa (image=quay.io/ceph/ceph:v19, name=eloquent_black, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:39:00 np0005549474 systemd[1]: Started libpod-conmon-a0e098574e6a8fb1112b031d265dba005410575f7e58fd669a6ba7b6d282a8aa.scope.
Dec  7 04:39:00 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:00 np0005549474 podman[73665]: 2025-12-07 09:39:00.052801799 +0000 UTC m=+0.455005419 container init a0e098574e6a8fb1112b031d265dba005410575f7e58fd669a6ba7b6d282a8aa (image=quay.io/ceph/ceph:v19, name=eloquent_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:00 np0005549474 podman[73665]: 2025-12-07 09:39:00.057889704 +0000 UTC m=+0.460093294 container start a0e098574e6a8fb1112b031d265dba005410575f7e58fd669a6ba7b6d282a8aa (image=quay.io/ceph/ceph:v19, name=eloquent_black, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:00 np0005549474 eloquent_black[73680]: 167 167
Dec  7 04:39:00 np0005549474 systemd[1]: libpod-a0e098574e6a8fb1112b031d265dba005410575f7e58fd669a6ba7b6d282a8aa.scope: Deactivated successfully.
Dec  7 04:39:00 np0005549474 podman[73665]: 2025-12-07 09:39:00.061795517 +0000 UTC m=+0.463999137 container attach a0e098574e6a8fb1112b031d265dba005410575f7e58fd669a6ba7b6d282a8aa (image=quay.io/ceph/ceph:v19, name=eloquent_black, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:00 np0005549474 podman[73665]: 2025-12-07 09:39:00.063641086 +0000 UTC m=+0.465844676 container died a0e098574e6a8fb1112b031d265dba005410575f7e58fd669a6ba7b6d282a8aa (image=quay.io/ceph/ceph:v19, name=eloquent_black, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 04:39:00 np0005549474 systemd[1]: var-lib-containers-storage-overlay-eeaf5d799cfe0624518e0dc7d64b22a48d465423da82084c1dfb3396ce3632a2-merged.mount: Deactivated successfully.
Dec  7 04:39:00 np0005549474 podman[73665]: 2025-12-07 09:39:00.252665421 +0000 UTC m=+0.654869051 container remove a0e098574e6a8fb1112b031d265dba005410575f7e58fd669a6ba7b6d282a8aa (image=quay.io/ceph/ceph:v19, name=eloquent_black, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Dec  7 04:39:00 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:39:00 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:39:00 np0005549474 systemd[1]: libpod-conmon-a0e098574e6a8fb1112b031d265dba005410575f7e58fd669a6ba7b6d282a8aa.scope: Deactivated successfully.
Dec  7 04:39:00 np0005549474 podman[73699]: 2025-12-07 09:39:00.333890582 +0000 UTC m=+0.052624434 container create 988ce90b8835ba6c429d1b470bdcfdb07d6e3018867847b1c69e243ccc167a2a (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:00 np0005549474 systemd[1]: Started libpod-conmon-988ce90b8835ba6c429d1b470bdcfdb07d6e3018867847b1c69e243ccc167a2a.scope.
Dec  7 04:39:00 np0005549474 podman[73699]: 2025-12-07 09:39:00.30435808 +0000 UTC m=+0.023092002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:00 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:00 np0005549474 podman[73699]: 2025-12-07 09:39:00.431914687 +0000 UTC m=+0.150648579 container init 988ce90b8835ba6c429d1b470bdcfdb07d6e3018867847b1c69e243ccc167a2a (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:00 np0005549474 podman[73699]: 2025-12-07 09:39:00.441366758 +0000 UTC m=+0.160100630 container start 988ce90b8835ba6c429d1b470bdcfdb07d6e3018867847b1c69e243ccc167a2a (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:00 np0005549474 podman[73699]: 2025-12-07 09:39:00.449283588 +0000 UTC m=+0.168017520 container attach 988ce90b8835ba6c429d1b470bdcfdb07d6e3018867847b1c69e243ccc167a2a (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:00 np0005549474 affectionate_ellis[73716]: AQA0SzVp+qzIGxAA4WehTqJ44F8TXHphDtqIvw==
Dec  7 04:39:00 np0005549474 systemd[1]: libpod-988ce90b8835ba6c429d1b470bdcfdb07d6e3018867847b1c69e243ccc167a2a.scope: Deactivated successfully.
Dec  7 04:39:00 np0005549474 podman[73699]: 2025-12-07 09:39:00.470390246 +0000 UTC m=+0.189124118 container died 988ce90b8835ba6c429d1b470bdcfdb07d6e3018867847b1c69e243ccc167a2a (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 04:39:00 np0005549474 podman[73699]: 2025-12-07 09:39:00.535590003 +0000 UTC m=+0.254323885 container remove 988ce90b8835ba6c429d1b470bdcfdb07d6e3018867847b1c69e243ccc167a2a (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 04:39:00 np0005549474 systemd[1]: libpod-conmon-988ce90b8835ba6c429d1b470bdcfdb07d6e3018867847b1c69e243ccc167a2a.scope: Deactivated successfully.
Dec  7 04:39:00 np0005549474 podman[73735]: 2025-12-07 09:39:00.638814096 +0000 UTC m=+0.070084587 container create 1f09e71812f0a801809b609c6680160b1d0883e55193c1a073daea0fb62ec7e0 (image=quay.io/ceph/ceph:v19, name=intelligent_hermann, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:39:00 np0005549474 systemd[1]: Started libpod-conmon-1f09e71812f0a801809b609c6680160b1d0883e55193c1a073daea0fb62ec7e0.scope.
Dec  7 04:39:00 np0005549474 podman[73735]: 2025-12-07 09:39:00.608837923 +0000 UTC m=+0.040108464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:00 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:00 np0005549474 podman[73735]: 2025-12-07 09:39:00.72807124 +0000 UTC m=+0.159341771 container init 1f09e71812f0a801809b609c6680160b1d0883e55193c1a073daea0fb62ec7e0 (image=quay.io/ceph/ceph:v19, name=intelligent_hermann, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:00 np0005549474 podman[73735]: 2025-12-07 09:39:00.733980446 +0000 UTC m=+0.165250937 container start 1f09e71812f0a801809b609c6680160b1d0883e55193c1a073daea0fb62ec7e0 (image=quay.io/ceph/ceph:v19, name=intelligent_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:39:00 np0005549474 podman[73735]: 2025-12-07 09:39:00.738668141 +0000 UTC m=+0.169938622 container attach 1f09e71812f0a801809b609c6680160b1d0883e55193c1a073daea0fb62ec7e0 (image=quay.io/ceph/ceph:v19, name=intelligent_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:39:00 np0005549474 intelligent_hermann[73751]: AQA0SzVp58XmLRAAIH6hH4W4VclDWEBG+8gATw==
Dec  7 04:39:00 np0005549474 systemd[1]: libpod-1f09e71812f0a801809b609c6680160b1d0883e55193c1a073daea0fb62ec7e0.scope: Deactivated successfully.
Dec  7 04:39:00 np0005549474 podman[73735]: 2025-12-07 09:39:00.774084729 +0000 UTC m=+0.205355190 container died 1f09e71812f0a801809b609c6680160b1d0883e55193c1a073daea0fb62ec7e0 (image=quay.io/ceph/ceph:v19, name=intelligent_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:00 np0005549474 podman[73735]: 2025-12-07 09:39:00.816080921 +0000 UTC m=+0.247351412 container remove 1f09e71812f0a801809b609c6680160b1d0883e55193c1a073daea0fb62ec7e0 (image=quay.io/ceph/ceph:v19, name=intelligent_hermann, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:39:00 np0005549474 systemd[1]: libpod-conmon-1f09e71812f0a801809b609c6680160b1d0883e55193c1a073daea0fb62ec7e0.scope: Deactivated successfully.
Dec  7 04:39:00 np0005549474 podman[73769]: 2025-12-07 09:39:00.891374664 +0000 UTC m=+0.053920839 container create 47ceecae635cfdecdba96608be76ed856730d245563bec379493c697c2fd5ed1 (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 04:39:00 np0005549474 systemd[1]: Started libpod-conmon-47ceecae635cfdecdba96608be76ed856730d245563bec379493c697c2fd5ed1.scope.
Dec  7 04:39:00 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:00 np0005549474 podman[73769]: 2025-12-07 09:39:00.862960932 +0000 UTC m=+0.025507137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:01 np0005549474 podman[73769]: 2025-12-07 09:39:01.335142145 +0000 UTC m=+0.497688340 container init 47ceecae635cfdecdba96608be76ed856730d245563bec379493c697c2fd5ed1 (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 04:39:01 np0005549474 podman[73769]: 2025-12-07 09:39:01.345123289 +0000 UTC m=+0.507669454 container start 47ceecae635cfdecdba96608be76ed856730d245563bec379493c697c2fd5ed1 (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:01 np0005549474 quizzical_mclean[73785]: AQA1SzVpGiLrFhAAlXeNiXDaTDTGuOXywQv26w==
Dec  7 04:39:01 np0005549474 systemd[1]: libpod-47ceecae635cfdecdba96608be76ed856730d245563bec379493c697c2fd5ed1.scope: Deactivated successfully.
Dec  7 04:39:01 np0005549474 podman[73769]: 2025-12-07 09:39:01.583411699 +0000 UTC m=+0.745957894 container attach 47ceecae635cfdecdba96608be76ed856730d245563bec379493c697c2fd5ed1 (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:01 np0005549474 podman[73769]: 2025-12-07 09:39:01.584103338 +0000 UTC m=+0.746649523 container died 47ceecae635cfdecdba96608be76ed856730d245563bec379493c697c2fd5ed1 (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 04:39:02 np0005549474 systemd[1]: var-lib-containers-storage-overlay-cd0b0b1eaaf26ec19120f008d1d70e95fdd1c3714b2953837ae223db22ec3eb2-merged.mount: Deactivated successfully.
Dec  7 04:39:05 np0005549474 podman[73769]: 2025-12-07 09:39:05.13709813 +0000 UTC m=+4.299644295 container remove 47ceecae635cfdecdba96608be76ed856730d245563bec379493c697c2fd5ed1 (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Dec  7 04:39:05 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:39:05 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:39:05 np0005549474 systemd[1]: libpod-conmon-47ceecae635cfdecdba96608be76ed856730d245563bec379493c697c2fd5ed1.scope: Deactivated successfully.
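The next helpers follow the same run-once pattern: eloquent_black's "167 167" is the ceph UID/GID probed inside the image, and the AQA...== strings printed by affectionate_ellis, intelligent_hermann, and quizzical_mclean are freshly generated cephx secrets for the new cluster's initial keyrings. The generation step is most likely ceph-authtool; a hedged sketch:

    podman run --rm --entrypoint ceph-authtool quay.io/ceph/ceph:v19 --gen-print-key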
Dec  7 04:39:05 np0005549474 podman[73807]: 2025-12-07 09:39:05.205908443 +0000 UTC m=+0.048405454 container create c232f45c4fd90f38ce1386c3b2181c520a888a392b9974a3fc1294a93fc952d2 (image=quay.io/ceph/ceph:v19, name=trusting_brahmagupta, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:05 np0005549474 systemd[1]: Started libpod-conmon-c232f45c4fd90f38ce1386c3b2181c520a888a392b9974a3fc1294a93fc952d2.scope.
Dec  7 04:39:05 np0005549474 podman[73807]: 2025-12-07 09:39:05.179697398 +0000 UTC m=+0.022194409 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:05 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cafaa2a4e1f01356a91a356c747152695123718322ff8a0558f5314b4a206564/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:05 np0005549474 podman[73807]: 2025-12-07 09:39:05.346395963 +0000 UTC m=+0.188892954 container init c232f45c4fd90f38ce1386c3b2181c520a888a392b9974a3fc1294a93fc952d2 (image=quay.io/ceph/ceph:v19, name=trusting_brahmagupta, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 04:39:05 np0005549474 podman[73807]: 2025-12-07 09:39:05.351289601 +0000 UTC m=+0.193786632 container start c232f45c4fd90f38ce1386c3b2181c520a888a392b9974a3fc1294a93fc952d2 (image=quay.io/ceph/ceph:v19, name=trusting_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:05 np0005549474 podman[73807]: 2025-12-07 09:39:05.356923931 +0000 UTC m=+0.199420922 container attach c232f45c4fd90f38ce1386c3b2181c520a888a392b9974a3fc1294a93fc952d2 (image=quay.io/ceph/ceph:v19, name=trusting_brahmagupta, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  7 04:39:05 np0005549474 trusting_brahmagupta[73824]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec  7 04:39:05 np0005549474 trusting_brahmagupta[73824]: setting min_mon_release = quincy
Dec  7 04:39:05 np0005549474 trusting_brahmagupta[73824]: /usr/bin/monmaptool: set fsid to 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:39:05 np0005549474 trusting_brahmagupta[73824]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec  7 04:39:05 np0005549474 systemd[1]: libpod-c232f45c4fd90f38ce1386c3b2181c520a888a392b9974a3fc1294a93fc952d2.scope: Deactivated successfully.
Dec  7 04:39:05 np0005549474 podman[73807]: 2025-12-07 09:39:05.390880591 +0000 UTC m=+0.233377612 container died c232f45c4fd90f38ce1386c3b2181c520a888a392b9974a3fc1294a93fc952d2 (image=quay.io/ceph/ceph:v19, name=trusting_brahmagupta, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:05 np0005549474 podman[73807]: 2025-12-07 09:39:05.471981048 +0000 UTC m=+0.314478039 container remove c232f45c4fd90f38ce1386c3b2181c520a888a392b9974a3fc1294a93fc952d2 (image=quay.io/ceph/ceph:v19, name=trusting_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 04:39:05 np0005549474 systemd[1]: libpod-conmon-c232f45c4fd90f38ce1386c3b2181c520a888a392b9974a3fc1294a93fc952d2.scope: Deactivated successfully.
Dec  7 04:39:05 np0005549474 podman[73845]: 2025-12-07 09:39:05.572907771 +0000 UTC m=+0.068642189 container create 53f830b54ddcdc184975ceff4337b5fa713de8aa165baed09e5c755a3a45b8ff (image=quay.io/ceph/ceph:v19, name=trusting_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 04:39:05 np0005549474 systemd[1]: Started libpod-conmon-53f830b54ddcdc184975ceff4337b5fa713de8aa165baed09e5c755a3a45b8ff.scope.
Dec  7 04:39:05 np0005549474 podman[73845]: 2025-12-07 09:39:05.542714441 +0000 UTC m=+0.038448919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:05 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a403fc2422a9af290e0ebae53d8fdb3f9c0526932e98527be002814bd41759/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a403fc2422a9af290e0ebae53d8fdb3f9c0526932e98527be002814bd41759/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a403fc2422a9af290e0ebae53d8fdb3f9c0526932e98527be002814bd41759/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a403fc2422a9af290e0ebae53d8fdb3f9c0526932e98527be002814bd41759/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:05 np0005549474 podman[73845]: 2025-12-07 09:39:05.6725906 +0000 UTC m=+0.168325088 container init 53f830b54ddcdc184975ceff4337b5fa713de8aa165baed09e5c755a3a45b8ff (image=quay.io/ceph/ceph:v19, name=trusting_shtern, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:05 np0005549474 podman[73845]: 2025-12-07 09:39:05.682296977 +0000 UTC m=+0.178031405 container start 53f830b54ddcdc184975ceff4337b5fa713de8aa165baed09e5c755a3a45b8ff (image=quay.io/ceph/ceph:v19, name=trusting_shtern, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 04:39:05 np0005549474 podman[73845]: 2025-12-07 09:39:05.686907479 +0000 UTC m=+0.182641957 container attach 53f830b54ddcdc184975ceff4337b5fa713de8aa165baed09e5c755a3a45b8ff (image=quay.io/ceph/ceph:v19, name=trusting_shtern, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:39:05 np0005549474 systemd[1]: libpod-53f830b54ddcdc184975ceff4337b5fa713de8aa165baed09e5c755a3a45b8ff.scope: Deactivated successfully.
Dec  7 04:39:05 np0005549474 podman[73845]: 2025-12-07 09:39:05.830869971 +0000 UTC m=+0.326604419 container died 53f830b54ddcdc184975ceff4337b5fa713de8aa165baed09e5c755a3a45b8ff (image=quay.io/ceph/ceph:v19, name=trusting_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:05 np0005549474 podman[73845]: 2025-12-07 09:39:05.875599906 +0000 UTC m=+0.371334304 container remove 53f830b54ddcdc184975ceff4337b5fa713de8aa165baed09e5c755a3a45b8ff (image=quay.io/ceph/ceph:v19, name=trusting_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 04:39:05 np0005549474 systemd[1]: libpod-conmon-53f830b54ddcdc184975ceff4337b5fa713de8aa165baed09e5c755a3a45b8ff.scope: Deactivated successfully.
Dec  7 04:39:05 np0005549474 systemd[1]: Reloading.
Dec  7 04:39:06 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:39:06 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:39:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay-cafaa2a4e1f01356a91a356c747152695123718322ff8a0558f5314b4a206564-merged.mount: Deactivated successfully.
Dec  7 04:39:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:39:06 np0005549474 systemd[1]: Reloading.
Dec  7 04:39:06 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:39:06 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:39:06 np0005549474 systemd[1]: Reached target All Ceph clusters and services.
Dec  7 04:39:06 np0005549474 systemd[1]: Reloading.
Dec  7 04:39:06 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:39:06 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:39:06 np0005549474 systemd[1]: Reached target Ceph cluster 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:39:06 np0005549474 systemd[1]: Reloading.
Dec  7 04:39:06 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:39:06 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:39:07 np0005549474 systemd[1]: Reloading.
Dec  7 04:39:07 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:39:07 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:39:07 np0005549474 systemd[1]: Created slice Slice /system/ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:39:07 np0005549474 systemd[1]: Reached target System Time Set.
Dec  7 04:39:07 np0005549474 systemd[1]: Reached target System Time Synchronized.
Dec  7 04:39:07 np0005549474 systemd[1]: Starting Ceph mon.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:39:07 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:39:07 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:39:07 np0005549474 podman[74141]: 2025-12-07 09:39:07.62083816 +0000 UTC m=+0.038339367 container create f6df2ae0d6d76083a3518a1481e5ab165fefc241a1328b4ef6bc5b1c2b6459f3 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc334fc0ca9c291c3571bc37738a169a87be9a99632a287ee3475131dc10792d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc334fc0ca9c291c3571bc37738a169a87be9a99632a287ee3475131dc10792d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc334fc0ca9c291c3571bc37738a169a87be9a99632a287ee3475131dc10792d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc334fc0ca9c291c3571bc37738a169a87be9a99632a287ee3475131dc10792d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:07 np0005549474 podman[74141]: 2025-12-07 09:39:07.691597963 +0000 UTC m=+0.109099190 container init f6df2ae0d6d76083a3518a1481e5ab165fefc241a1328b4ef6bc5b1c2b6459f3 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  7 04:39:07 np0005549474 podman[74141]: 2025-12-07 09:39:07.698907617 +0000 UTC m=+0.116408814 container start f6df2ae0d6d76083a3518a1481e5ab165fefc241a1328b4ef6bc5b1c2b6459f3 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:39:07 np0005549474 podman[74141]: 2025-12-07 09:39:07.603480979 +0000 UTC m=+0.020982216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:07 np0005549474 bash[74141]: f6df2ae0d6d76083a3518a1481e5ab165fefc241a1328b4ef6bc5b1c2b6459f3
Dec  7 04:39:07 np0005549474 systemd[1]: Started Ceph mon.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: pidfile_write: ignore empty --pid-file
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: load: jerasure load: lrc 
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: RocksDB version: 7.9.2
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Git sha 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: DB SUMMARY
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: DB Session ID:  NZBDND2RJLJQLGI25J2F
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: CURRENT file:  CURRENT
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: IDENTITY file:  IDENTITY
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                         Options.error_if_exists: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                       Options.create_if_missing: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                         Options.paranoid_checks: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                                     Options.env: 0x558150928c20
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                                Options.info_log: 0x558151d06d60
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                Options.max_file_opening_threads: 16
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                              Options.statistics: (nil)
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                               Options.use_fsync: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                       Options.max_log_file_size: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                         Options.allow_fallocate: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                        Options.use_direct_reads: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:          Options.create_missing_column_families: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                              Options.db_log_dir: 
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                                 Options.wal_dir: 
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                   Options.advise_random_on_open: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                    Options.write_buffer_manager: 0x558151d0b900
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                            Options.rate_limiter: (nil)
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                  Options.unordered_write: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                               Options.row_cache: None
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                              Options.wal_filter: None
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.allow_ingest_behind: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.two_write_queues: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.manual_wal_flush: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.wal_compression: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.atomic_flush: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                 Options.log_readahead_size: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.allow_data_in_errors: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.db_host_id: __hostname__
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.max_background_jobs: 2
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.max_background_compactions: -1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.max_subcompactions: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.max_total_wal_size: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                          Options.max_open_files: -1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                          Options.bytes_per_sync: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:       Options.compaction_readahead_size: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                  Options.max_background_flushes: -1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Compression algorithms supported:
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: 	kZSTD supported: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: 	kXpressCompression supported: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: 	kBZip2Compression supported: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: 	kLZ4Compression supported: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: 	kZlibCompression supported: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: 	kSnappyCompression supported: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:           Options.merge_operator: 
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:        Options.compaction_filter: None
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558151d06500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558151d2b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:        Options.write_buffer_size: 33554432
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:  Options.max_write_buffer_number: 2
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:          Options.compression: NoCompression
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.num_levels: 7
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100347747879, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100347749858, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "NZBDND2RJLJQLGI25J2F", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100347750011, "job": 1, "event": "recovery_finished"}
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558151d2ce00
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: DB pointer 0x558151e36000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558151d2b350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@-1(???) e0 preinit fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(probing) e0 win_standalone_election
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  7 04:39:07 np0005549474 podman[74161]: 2025-12-07 09:39:07.849432593 +0000 UTC m=+0.114507654 container create 424e49e3bcccba87f662709e9ff6fac9023e9a707ad510adf591d62857a6f65e (image=quay.io/ceph/ceph:v19, name=vigilant_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  7 04:39:07 np0005549474 podman[74161]: 2025-12-07 09:39:07.757678722 +0000 UTC m=+0.022753813 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [DBG] : fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [DBG] : last_changed 2025-12-07T09:39:05.386379+0000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [DBG] : created 2025-12-07T09:39:05.386379+0000
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).mds e1 new map
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).mds e1 print_map
e1
btime 2025-12-07T09:39:07.860179+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [DBG] : fsmap 
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mkfs 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec  7 04:39:07 np0005549474 systemd[1]: Started libpod-conmon-424e49e3bcccba87f662709e9ff6fac9023e9a707ad510adf591d62857a6f65e.scope.
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  7 04:39:07 np0005549474 ceph-mon[74160]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 04:39:07 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a56031f3f648c895f5b99b9de26b2beb801716b195dffd23ac7aa1d9d447ab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a56031f3f648c895f5b99b9de26b2beb801716b195dffd23ac7aa1d9d447ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a56031f3f648c895f5b99b9de26b2beb801716b195dffd23ac7aa1d9d447ab/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:07 np0005549474 podman[74161]: 2025-12-07 09:39:07.927324105 +0000 UTC m=+0.192399206 container init 424e49e3bcccba87f662709e9ff6fac9023e9a707ad510adf591d62857a6f65e (image=quay.io/ceph/ceph:v19, name=vigilant_driscoll, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  7 04:39:07 np0005549474 podman[74161]: 2025-12-07 09:39:07.933311183 +0000 UTC m=+0.198386254 container start 424e49e3bcccba87f662709e9ff6fac9023e9a707ad510adf591d62857a6f65e (image=quay.io/ceph/ceph:v19, name=vigilant_driscoll, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 04:39:07 np0005549474 podman[74161]: 2025-12-07 09:39:07.936280752 +0000 UTC m=+0.201355823 container attach 424e49e3bcccba87f662709e9ff6fac9023e9a707ad510adf591d62857a6f65e (image=quay.io/ceph/ceph:v19, name=vigilant_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 04:39:08 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec  7 04:39:08 np0005549474 ceph-mon[74160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817592185' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:  cluster:
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:    id:     75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:    health: HEALTH_OK
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]: 
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:  services:
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:    mon: 1 daemons, quorum compute-0 (age 0.276243s)
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:    mgr: no daemons active
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:    osd: 0 osds: 0 up, 0 in
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]: 
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:  data:
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:    pools:   0 pools, 0 pgs
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:    objects: 0 objects, 0 B
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:    usage:   0 B used, 0 B / 0 B avail
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]:    pgs:     
Dec  7 04:39:08 np0005549474 vigilant_driscoll[74216]: 
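
The indented block above is the stdout of a one-shot "ceph status" container (name vigilant_driscoll), matching the mon_command dispatch logged at 04:39:08. Roughly the same check, sketched with podman; the bind mounts are assumptions inferred from the overlay paths in the surrounding kernel lines:

    podman run --rm \
      -v /etc/ceph/ceph.conf:/etc/ceph/ceph.conf:ro \
      -v /etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:ro \
      quay.io/ceph/ceph:v19 ceph status
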
Dec  7 04:39:08 np0005549474 systemd[1]: libpod-424e49e3bcccba87f662709e9ff6fac9023e9a707ad510adf591d62857a6f65e.scope: Deactivated successfully.
Dec  7 04:39:08 np0005549474 podman[74161]: 2025-12-07 09:39:08.151104311 +0000 UTC m=+0.416179402 container died 424e49e3bcccba87f662709e9ff6fac9023e9a707ad510adf591d62857a6f65e (image=quay.io/ceph/ceph:v19, name=vigilant_driscoll, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Dec  7 04:39:08 np0005549474 podman[74161]: 2025-12-07 09:39:08.194091909 +0000 UTC m=+0.459166980 container remove 424e49e3bcccba87f662709e9ff6fac9023e9a707ad510adf591d62857a6f65e (image=quay.io/ceph/ceph:v19, name=vigilant_driscoll, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:08 np0005549474 systemd[1]: libpod-conmon-424e49e3bcccba87f662709e9ff6fac9023e9a707ad510adf591d62857a6f65e.scope: Deactivated successfully.
Dec  7 04:39:08 np0005549474 podman[74254]: 2025-12-07 09:39:08.269930347 +0000 UTC m=+0.048810093 container create 81232f451f5a2e3e1e587bbbb5450e1461ce415c0ba22091d7da69eb09783c13 (image=quay.io/ceph/ceph:v19, name=admiring_chaplygin, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  7 04:39:08 np0005549474 systemd[1]: Started libpod-conmon-81232f451f5a2e3e1e587bbbb5450e1461ce415c0ba22091d7da69eb09783c13.scope.
Dec  7 04:39:08 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:08 np0005549474 podman[74254]: 2025-12-07 09:39:08.248977302 +0000 UTC m=+0.027857078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a27ab54a0c594d288844e854cc99eaa7c18ad7c4569bcc933db1182874835f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a27ab54a0c594d288844e854cc99eaa7c18ad7c4569bcc933db1182874835f7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a27ab54a0c594d288844e854cc99eaa7c18ad7c4569bcc933db1182874835f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a27ab54a0c594d288844e854cc99eaa7c18ad7c4569bcc933db1182874835f7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:08 np0005549474 podman[74254]: 2025-12-07 09:39:08.370061989 +0000 UTC m=+0.148941755 container init 81232f451f5a2e3e1e587bbbb5450e1461ce415c0ba22091d7da69eb09783c13 (image=quay.io/ceph/ceph:v19, name=admiring_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:08 np0005549474 podman[74254]: 2025-12-07 09:39:08.37463604 +0000 UTC m=+0.153515786 container start 81232f451f5a2e3e1e587bbbb5450e1461ce415c0ba22091d7da69eb09783c13 (image=quay.io/ceph/ceph:v19, name=admiring_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:08 np0005549474 podman[74254]: 2025-12-07 09:39:08.377098175 +0000 UTC m=+0.155977921 container attach 81232f451f5a2e3e1e587bbbb5450e1461ce415c0ba22091d7da69eb09783c13 (image=quay.io/ceph/ceph:v19, name=admiring_chaplygin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec  7 04:39:08 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  7 04:39:08 np0005549474 ceph-mon[74160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1354671929' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 04:39:08 np0005549474 ceph-mon[74160]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1354671929' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  7 04:39:08 np0005549474 admiring_chaplygin[74271]: 
Dec  7 04:39:08 np0005549474 admiring_chaplygin[74271]: [global]
Dec  7 04:39:08 np0005549474 admiring_chaplygin[74271]:     fsid = 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:39:08 np0005549474 admiring_chaplygin[74271]:     mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
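
The two lines above are the output of the "config assimilate-conf" call in the audit log: it moves recognized options from an ini-style conf into the mon's config database and prints whatever must remain in the file (here only fsid and mon_host). The equivalent direct invocation, with the input path as an example:

    ceph config assimilate-conf -i /etc/ceph/ceph.conf
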
Dec  7 04:39:08 np0005549474 systemd[1]: libpod-81232f451f5a2e3e1e587bbbb5450e1461ce415c0ba22091d7da69eb09783c13.scope: Deactivated successfully.
Dec  7 04:39:08 np0005549474 podman[74254]: 2025-12-07 09:39:08.571938684 +0000 UTC m=+0.350818460 container died 81232f451f5a2e3e1e587bbbb5450e1461ce415c0ba22091d7da69eb09783c13 (image=quay.io/ceph/ceph:v19, name=admiring_chaplygin, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 04:39:08 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3a27ab54a0c594d288844e854cc99eaa7c18ad7c4569bcc933db1182874835f7-merged.mount: Deactivated successfully.
Dec  7 04:39:08 np0005549474 podman[74254]: 2025-12-07 09:39:08.623261663 +0000 UTC m=+0.402141449 container remove 81232f451f5a2e3e1e587bbbb5450e1461ce415c0ba22091d7da69eb09783c13 (image=quay.io/ceph/ceph:v19, name=admiring_chaplygin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:39:08 np0005549474 systemd[1]: libpod-conmon-81232f451f5a2e3e1e587bbbb5450e1461ce415c0ba22091d7da69eb09783c13.scope: Deactivated successfully.
Dec  7 04:39:08 np0005549474 podman[74309]: 2025-12-07 09:39:08.70243364 +0000 UTC m=+0.055210763 container create f6835a9af99c19dc56760b40a07587f1bdcb4f1e7091160dbe26253b04120486 (image=quay.io/ceph/ceph:v19, name=kind_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:08 np0005549474 systemd[1]: Started libpod-conmon-f6835a9af99c19dc56760b40a07587f1bdcb4f1e7091160dbe26253b04120486.scope.
Dec  7 04:39:08 np0005549474 podman[74309]: 2025-12-07 09:39:08.671729057 +0000 UTC m=+0.024506260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:08 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75dc54482e5a250f0164d04539869bb72e30392bb14f7c0e90ddcf39bf021f1f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75dc54482e5a250f0164d04539869bb72e30392bb14f7c0e90ddcf39bf021f1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75dc54482e5a250f0164d04539869bb72e30392bb14f7c0e90ddcf39bf021f1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75dc54482e5a250f0164d04539869bb72e30392bb14f7c0e90ddcf39bf021f1f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:08 np0005549474 podman[74309]: 2025-12-07 09:39:08.795510254 +0000 UTC m=+0.148287407 container init f6835a9af99c19dc56760b40a07587f1bdcb4f1e7091160dbe26253b04120486 (image=quay.io/ceph/ceph:v19, name=kind_blackburn, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  7 04:39:08 np0005549474 podman[74309]: 2025-12-07 09:39:08.800286711 +0000 UTC m=+0.153063844 container start f6835a9af99c19dc56760b40a07587f1bdcb4f1e7091160dbe26253b04120486 (image=quay.io/ceph/ceph:v19, name=kind_blackburn, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:39:08 np0005549474 podman[74309]: 2025-12-07 09:39:08.803564178 +0000 UTC m=+0.156341311 container attach f6835a9af99c19dc56760b40a07587f1bdcb4f1e7091160dbe26253b04120486 (image=quay.io/ceph/ceph:v19, name=kind_blackburn, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:08 np0005549474 ceph-mon[74160]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 04:39:08 np0005549474 ceph-mon[74160]: from='client.? 192.168.122.100:0/1354671929' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 04:39:08 np0005549474 ceph-mon[74160]: from='client.? 192.168.122.100:0/1354671929' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  7 04:39:08 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:39:08 np0005549474 ceph-mon[74160]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2566614672' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
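
The follow-up "config generate-minimal-conf" dispatched here emits just enough configuration (fsid and mon_host) for a client to reach the monitors. A sketch, with an arbitrary output path:

    ceph config generate-minimal-conf > /etc/ceph/ceph.conf.minimal
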
Dec  7 04:39:08 np0005549474 systemd[1]: libpod-f6835a9af99c19dc56760b40a07587f1bdcb4f1e7091160dbe26253b04120486.scope: Deactivated successfully.
Dec  7 04:39:08 np0005549474 podman[74309]: 2025-12-07 09:39:08.996144618 +0000 UTC m=+0.348921741 container died f6835a9af99c19dc56760b40a07587f1bdcb4f1e7091160dbe26253b04120486 (image=quay.io/ceph/ceph:v19, name=kind_blackburn, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 04:39:09 np0005549474 systemd[1]: var-lib-containers-storage-overlay-75dc54482e5a250f0164d04539869bb72e30392bb14f7c0e90ddcf39bf021f1f-merged.mount: Deactivated successfully.
Dec  7 04:39:09 np0005549474 podman[74309]: 2025-12-07 09:39:09.03969571 +0000 UTC m=+0.392472833 container remove f6835a9af99c19dc56760b40a07587f1bdcb4f1e7091160dbe26253b04120486 (image=quay.io/ceph/ceph:v19, name=kind_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 04:39:09 np0005549474 systemd[1]: libpod-conmon-f6835a9af99c19dc56760b40a07587f1bdcb4f1e7091160dbe26253b04120486.scope: Deactivated successfully.
Dec  7 04:39:09 np0005549474 systemd[1]: Stopping Ceph mon.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:39:09 np0005549474 ceph-mon[74160]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  7 04:39:09 np0005549474 ceph-mon[74160]: mon.compute-0@0(leader) e1 shutdown
Dec  7 04:39:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0[74156]: 2025-12-07T09:39:09.203+0000 7f5b2d5c7640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  7 04:39:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0[74156]: 2025-12-07T09:39:09.203+0000 7f5b2d5c7640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  7 04:39:09 np0005549474 ceph-mon[74160]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  7 04:39:09 np0005549474 ceph-mon[74160]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
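
The SIGTERM above comes from systemd stopping the mon unit (the "Stopping Ceph mon.compute-0 ..." line at 04:39:09) so the daemon can be restarted under its permanent unit. A sketch of the same stop done by hand, using the unit name systemd logs below:

    systemctl stop ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@mon.compute-0.service
    journalctl -u ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@mon.compute-0.service -n 20
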
Dec  7 04:39:09 np0005549474 podman[74393]: 2025-12-07 09:39:09.398040529 +0000 UTC m=+0.222759139 container died f6df2ae0d6d76083a3518a1481e5ab165fefc241a1328b4ef6bc5b1c2b6459f3 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  7 04:39:09 np0005549474 systemd[1]: var-lib-containers-storage-overlay-dc334fc0ca9c291c3571bc37738a169a87be9a99632a287ee3475131dc10792d-merged.mount: Deactivated successfully.
Dec  7 04:39:09 np0005549474 podman[74393]: 2025-12-07 09:39:09.436295283 +0000 UTC m=+0.261013863 container remove f6df2ae0d6d76083a3518a1481e5ab165fefc241a1328b4ef6bc5b1c2b6459f3 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:09 np0005549474 bash[74393]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0
Dec  7 04:39:09 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:39:09 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:39:09 np0005549474 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 04:39:09 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@mon.compute-0.service: Deactivated successfully.
Dec  7 04:39:09 np0005549474 systemd[1]: Stopped Ceph mon.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:39:09 np0005549474 systemd[1]: Starting Ceph mon.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:39:09 np0005549474 podman[74496]: 2025-12-07 09:39:09.830528391 +0000 UTC m=+0.053931529 container create 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  7 04:39:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8386be745216607b23b27776d4da880bf444951b8b947f286171da3d6c51f9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8386be745216607b23b27776d4da880bf444951b8b947f286171da3d6c51f9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8386be745216607b23b27776d4da880bf444951b8b947f286171da3d6c51f9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8386be745216607b23b27776d4da880bf444951b8b947f286171da3d6c51f9d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:09 np0005549474 podman[74496]: 2025-12-07 09:39:09.811412436 +0000 UTC m=+0.034815604 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:09 np0005549474 podman[74496]: 2025-12-07 09:39:09.910895469 +0000 UTC m=+0.134298647 container init 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:39:09 np0005549474 podman[74496]: 2025-12-07 09:39:09.923853383 +0000 UTC m=+0.147256521 container start 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 04:39:09 np0005549474 bash[74496]: 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303
Dec  7 04:39:09 np0005549474 systemd[1]: Started Ceph mon.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: pidfile_write: ignore empty --pid-file
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: load: jerasure load: lrc 
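
The mon is now running again, as pid 2 inside its new container (per the version line above). Two sketches for poking the restarted daemon, assuming cephadm is installed on the host:

    cephadm shell --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -- ceph -s
    cephadm enter --name mon.compute-0 -- ceph daemon mon.compute-0 mon_status
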
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: RocksDB version: 7.9.2
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Git sha 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: DB SUMMARY
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: DB Session ID:  JT62X3AUQJPC1MNA6VWA
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: CURRENT file:  CURRENT
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: IDENTITY file:  IDENTITY
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58743 ; 
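
The DB SUMMARY above lists the store the mon is about to open: one SST file plus a ~58 KB write-ahead log to replay. A sketch for inspecting the same store offline with ceph-monstore-tool; run it only while the mon is stopped, and note that the path shown is the in-container one (on the host, cephadm typically places it under /var/lib/ceph/<fsid>/mon.compute-0/store.db):

    ceph-monstore-tool /var/lib/ceph/mon/ceph-compute-0 dump-keys
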
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                         Options.error_if_exists: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                       Options.create_if_missing: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                         Options.paranoid_checks: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                                     Options.env: 0x5637d826dc20
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                                Options.info_log: 0x5637d9e82e20
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                Options.max_file_opening_threads: 16
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                              Options.statistics: (nil)
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                               Options.use_fsync: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                       Options.max_log_file_size: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                         Options.allow_fallocate: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                        Options.use_direct_reads: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:          Options.create_missing_column_families: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                              Options.db_log_dir: 
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                                 Options.wal_dir: 
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                   Options.advise_random_on_open: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                    Options.write_buffer_manager: 0x5637d9e87900
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                            Options.rate_limiter: (nil)
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                  Options.unordered_write: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                               Options.row_cache: None
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                              Options.wal_filter: None
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.allow_ingest_behind: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.two_write_queues: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.manual_wal_flush: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.wal_compression: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.atomic_flush: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                 Options.log_readahead_size: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.allow_data_in_errors: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.db_host_id: __hostname__
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.max_background_jobs: 2
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.max_background_compactions: -1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.max_subcompactions: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.max_total_wal_size: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                          Options.max_open_files: -1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                          Options.bytes_per_sync: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:       Options.compaction_readahead_size: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                  Options.max_background_flushes: -1
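
The DBOptions dumped above are the mon's compiled-in RocksDB defaults combined with the mon_rocksdb_options config string. A sketch for reading the current string back over the admin socket (cephadm entry point assumed, as above):

    cephadm enter --name mon.compute-0 -- ceph daemon mon.compute-0 config get mon_rocksdb_options
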
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Compression algorithms supported:
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:     kZSTD supported: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:     kXpressCompression supported: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:     kBZip2Compression supported: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:     kZSTDNotFinalCompression supported: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:     kLZ4Compression supported: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:     kZlibCompression supported: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:     kLZ4HCCompression supported: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:     kSnappyCompression supported: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:           Options.merge_operator: 
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:        Options.compaction_filter: None
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d9e82aa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5637d9ea7350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:        Options.write_buffer_size: 33554432
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:  Options.max_write_buffer_number: 2
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:          Options.compression: NoCompression
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.num_levels: 7
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100349972778, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100349979722, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56968, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54485, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100349, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100349979992, "job": 1, "event": "recovery_finished"}
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5637d9ea8e00
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: DB pointer 0x5637d9fb2000
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      8.5      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      8.5      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      8.5      0.01              0.00         1    0.007       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      8.5      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 2.82 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 2.82 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5637d9ea7350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
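
rsyslog delivers a daemon's multi-line output, such as the stats dump above, as a single record with control characters escaped in octal (#012 is LF, #011 is TAB). A minimal decoder sketch, assuming the default EscapeControlCharactersOnReceive behavior; it is deliberately naive and treats every #NNN triple as an escape:

    import re

    ESC_RE = re.compile(r"#(\d{3})")

    def unescape_syslog(record: str) -> str:
        """Turn '#012'-style octal escapes back into the original characters."""
        return ESC_RE.sub(lambda m: chr(int(m.group(1), 8)), record)

    # unescape_syslog("** DB Stats **#012Uptime(secs): 0.0 total")
    #   -> "** DB Stats **\nUptime(secs): 0.0 total"
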
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: mon.compute-0@-1(???) e1 preinit fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: mon.compute-0@-1(???).mds e1 new map
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: mon.compute-0@-1(???).mds e1 print_map
e1
btime 2025-12-07T09:39:07:860179+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  7 04:39:09 np0005549474 ceph-mon[74516]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : last_changed 2025-12-07T09:39:05.386379+0000
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : created 2025-12-07T09:39:05.386379+0000
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap 
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
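
At this point the single monitor has elected itself leader (win_standalone_election, rank 0, quorum containing only rank 0). A sketch for confirming quorum from the host, assuming the ceph CLI and the admin keyring at their default locations:

    import json
    import subprocess

    def quorum_names():
        """Return the list of monitor names currently in quorum."""
        out = subprocess.run(
            ["ceph", "quorum_status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["quorum_names"]

    # On this node the expected result is ["compute-0"].
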
Dec  7 04:39:10 np0005549474 podman[74517]: 2025-12-07 09:39:10.005841244 +0000 UTC m=+0.051279239 container create 357a2b2d030d1aeafc71b8363a7226396e4973a1b8b2b779de5c37f106a58130 (image=quay.io/ceph/ceph:v19, name=magical_greider, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 04:39:10 np0005549474 systemd[1]: Started libpod-conmon-357a2b2d030d1aeafc71b8363a7226396e4973a1b8b2b779de5c37f106a58130.scope.
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 04:39:10 np0005549474 podman[74517]: 2025-12-07 09:39:09.978312265 +0000 UTC m=+0.023750280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:10 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77c7f75115cc9933362ee9507c6da3b98f2223717df3444650bf1b480d6511c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77c7f75115cc9933362ee9507c6da3b98f2223717df3444650bf1b480d6511c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77c7f75115cc9933362ee9507c6da3b98f2223717df3444650bf1b480d6511c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
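
The kernel is noting that these overlay-mounted XFS paths carry timestamps only until 2038, i.e. the backing filesystem was created without the bigtime feature. A sketch for checking a mountpoint, assuming xfsprogs is installed; the path in the usage comment is illustrative:

    import subprocess

    def xfs_supports_bigtime(mountpoint: str) -> bool:
        """True if `xfs_info` reports bigtime=1 for the given mountpoint."""
        out = subprocess.run(
            ["xfs_info", mountpoint],
            check=True, capture_output=True, text=True,
        ).stdout
        return "bigtime=1" in out

    # e.g. xfs_supports_bigtime("/var/lib/containers")  # hypothetical mountpoint
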
Dec  7 04:39:10 np0005549474 podman[74517]: 2025-12-07 09:39:10.09673597 +0000 UTC m=+0.142173975 container init 357a2b2d030d1aeafc71b8363a7226396e4973a1b8b2b779de5c37f106a58130 (image=quay.io/ceph/ceph:v19, name=magical_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:10 np0005549474 podman[74517]: 2025-12-07 09:39:10.103024597 +0000 UTC m=+0.148462612 container start 357a2b2d030d1aeafc71b8363a7226396e4973a1b8b2b779de5c37f106a58130 (image=quay.io/ceph/ceph:v19, name=magical_greider, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:10 np0005549474 podman[74517]: 2025-12-07 09:39:10.107413133 +0000 UTC m=+0.152851138 container attach 357a2b2d030d1aeafc71b8363a7226396e4973a1b8b2b779de5c37f106a58130 (image=quay.io/ceph/ceph:v19, name=magical_greider, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec  7 04:39:10 np0005549474 systemd[1]: libpod-357a2b2d030d1aeafc71b8363a7226396e4973a1b8b2b779de5c37f106a58130.scope: Deactivated successfully.
Dec  7 04:39:10 np0005549474 podman[74517]: 2025-12-07 09:39:10.355078902 +0000 UTC m=+0.400516917 container died 357a2b2d030d1aeafc71b8363a7226396e4973a1b8b2b779de5c37f106a58130 (image=quay.io/ceph/ceph:v19, name=magical_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  7 04:39:10 np0005549474 podman[74517]: 2025-12-07 09:39:10.40449285 +0000 UTC m=+0.449930865 container remove 357a2b2d030d1aeafc71b8363a7226396e4973a1b8b2b779de5c37f106a58130 (image=quay.io/ceph/ceph:v19, name=magical_greider, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:10 np0005549474 systemd[1]: libpod-conmon-357a2b2d030d1aeafc71b8363a7226396e4973a1b8b2b779de5c37f106a58130.scope: Deactivated successfully.
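
The create → init → start → attach → died → remove sequence above is podman's event trail for a one-shot container: cephadm launches a short-lived quay.io/ceph/ceph:v19 container (here auto-named magical_greider) to run a single ceph command, then deletes it. A minimal reproduction sketch; the command run inside the container is illustrative:

    import subprocess

    # A one-shot container: podman emits the same create/init/start/attach/
    # died/remove events seen above, then deletes the container (--rm).
    result = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v19", "ceph", "--version"],
        capture_output=True, text=True,
    )
    print(result.stdout)
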
Dec  7 04:39:10 np0005549474 podman[74609]: 2025-12-07 09:39:10.487166939 +0000 UTC m=+0.052414759 container create f26ca45f72283abb7810ce468eda3379c94469f6db7f5a5727d21bf065930074 (image=quay.io/ceph/ceph:v19, name=friendly_golick, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:10 np0005549474 systemd[1]: Started libpod-conmon-f26ca45f72283abb7810ce468eda3379c94469f6db7f5a5727d21bf065930074.scope.
Dec  7 04:39:10 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f5f42075f94ad679a8b5a7ac6e2c0fcb45f64b0204b0101f2369a2649bd4dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f5f42075f94ad679a8b5a7ac6e2c0fcb45f64b0204b0101f2369a2649bd4dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f5f42075f94ad679a8b5a7ac6e2c0fcb45f64b0204b0101f2369a2649bd4dd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:10 np0005549474 podman[74609]: 2025-12-07 09:39:10.465571907 +0000 UTC m=+0.030819737 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:10 np0005549474 podman[74609]: 2025-12-07 09:39:10.573756292 +0000 UTC m=+0.139004112 container init f26ca45f72283abb7810ce468eda3379c94469f6db7f5a5727d21bf065930074 (image=quay.io/ceph/ceph:v19, name=friendly_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:39:10 np0005549474 podman[74609]: 2025-12-07 09:39:10.583242443 +0000 UTC m=+0.148490253 container start f26ca45f72283abb7810ce468eda3379c94469f6db7f5a5727d21bf065930074 (image=quay.io/ceph/ceph:v19, name=friendly_golick, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 04:39:10 np0005549474 podman[74609]: 2025-12-07 09:39:10.586937971 +0000 UTC m=+0.152185801 container attach f26ca45f72283abb7810ce468eda3379c94469f6db7f5a5727d21bf065930074 (image=quay.io/ceph/ceph:v19, name=friendly_golick, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
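
The two handle_command lines show cephadm pushing public_network and cluster_network into the cluster configuration via `config set` (the values themselves are not echoed in the mon log). A hedged sketch of the equivalent call; the CIDR is an illustrative assumption, not taken from the log:

    import subprocess

    # Illustrative CIDR; the actual value is not shown in the log above.
    subprocess.run(
        ["ceph", "config", "set", "global", "public_network", "192.168.122.0/24"],
        check=True,
    )
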
Dec  7 04:39:10 np0005549474 systemd[1]: libpod-f26ca45f72283abb7810ce468eda3379c94469f6db7f5a5727d21bf065930074.scope: Deactivated successfully.
Dec  7 04:39:10 np0005549474 podman[74609]: 2025-12-07 09:39:10.815631137 +0000 UTC m=+0.380878987 container died f26ca45f72283abb7810ce468eda3379c94469f6db7f5a5727d21bf065930074 (image=quay.io/ceph/ceph:v19, name=friendly_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:10 np0005549474 systemd[1]: var-lib-containers-storage-overlay-86f5f42075f94ad679a8b5a7ac6e2c0fcb45f64b0204b0101f2369a2649bd4dd-merged.mount: Deactivated successfully.
Dec  7 04:39:10 np0005549474 podman[74609]: 2025-12-07 09:39:10.86333942 +0000 UTC m=+0.428587200 container remove f26ca45f72283abb7810ce468eda3379c94469f6db7f5a5727d21bf065930074 (image=quay.io/ceph/ceph:v19, name=friendly_golick, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 04:39:10 np0005549474 systemd[1]: libpod-conmon-f26ca45f72283abb7810ce468eda3379c94469f6db7f5a5727d21bf065930074.scope: Deactivated successfully.
Dec  7 04:39:10 np0005549474 systemd[1]: Reloading.
Dec  7 04:39:10 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:39:10 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:39:11 np0005549474 systemd[1]: Reloading.
Dec  7 04:39:11 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:39:11 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
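
systemd-sysv-generator is warning that the legacy network init script ships no native unit, so one is synthesized at every daemon reload. What it asks for is a real unit file; an entirely illustrative sketch of the shape such a unit could take (this is not the generated unit):

    [Unit]
    Description=Legacy network bring-up (illustrative sketch)
    After=network-pre.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/etc/rc.d/init.d/network start
    ExecStop=/etc/rc.d/init.d/network stop

    [Install]
    WantedBy=multi-user.target
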
Dec  7 04:39:11 np0005549474 systemd[1]: Starting Ceph mgr.compute-0.dotugk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:39:11 np0005549474 podman[74792]: 2025-12-07 09:39:11.662170663 +0000 UTC m=+0.037918825 container create 7d74b23a9f56d9be32a8e01867dfed7d341a2fe19e0e6c908d25ffcc08921ab5 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 04:39:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73837d5c61add16510bf90550fdb60a966447ee58562e1ffc23597cd67521f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73837d5c61add16510bf90550fdb60a966447ee58562e1ffc23597cd67521f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73837d5c61add16510bf90550fdb60a966447ee58562e1ffc23597cd67521f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e73837d5c61add16510bf90550fdb60a966447ee58562e1ffc23597cd67521f1/merged/var/lib/ceph/mgr/ceph-compute-0.dotugk supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:11 np0005549474 podman[74792]: 2025-12-07 09:39:11.727074821 +0000 UTC m=+0.102823003 container init 7d74b23a9f56d9be32a8e01867dfed7d341a2fe19e0e6c908d25ffcc08921ab5 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 04:39:11 np0005549474 podman[74792]: 2025-12-07 09:39:11.733326568 +0000 UTC m=+0.109074710 container start 7d74b23a9f56d9be32a8e01867dfed7d341a2fe19e0e6c908d25ffcc08921ab5 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 04:39:11 np0005549474 bash[74792]: 7d74b23a9f56d9be32a8e01867dfed7d341a2fe19e0e6c908d25ffcc08921ab5
Dec  7 04:39:11 np0005549474 podman[74792]: 2025-12-07 09:39:11.642762149 +0000 UTC m=+0.018510311 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:11 np0005549474 systemd[1]: Started Ceph mgr.compute-0.dotugk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:39:11 np0005549474 ceph-mgr[74811]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 04:39:11 np0005549474 ceph-mgr[74811]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 04:39:11 np0005549474 ceph-mgr[74811]: pidfile_write: ignore empty --pid-file
Dec  7 04:39:11 np0005549474 podman[74812]: 2025-12-07 09:39:11.814550878 +0000 UTC m=+0.045524517 container create 23b785c54dccaf9f79f4f15d5251c0fd2e20d33428ea0622bd1c6ce9d3639db1 (image=quay.io/ceph/ceph:v19, name=elastic_nightingale, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  7 04:39:11 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'alerts'
Dec  7 04:39:11 np0005549474 systemd[1]: Started libpod-conmon-23b785c54dccaf9f79f4f15d5251c0fd2e20d33428ea0622bd1c6ce9d3639db1.scope.
Dec  7 04:39:11 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2babf0b22767a86e6557974395ea3b4df9b57d4f75960f9a37784a71b3a748d5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2babf0b22767a86e6557974395ea3b4df9b57d4f75960f9a37784a71b3a748d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2babf0b22767a86e6557974395ea3b4df9b57d4f75960f9a37784a71b3a748d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:11 np0005549474 podman[74812]: 2025-12-07 09:39:11.797676232 +0000 UTC m=+0.028649871 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:11 np0005549474 podman[74812]: 2025-12-07 09:39:11.898658965 +0000 UTC m=+0.129632634 container init 23b785c54dccaf9f79f4f15d5251c0fd2e20d33428ea0622bd1c6ce9d3639db1 (image=quay.io/ceph/ceph:v19, name=elastic_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 04:39:11 np0005549474 podman[74812]: 2025-12-07 09:39:11.912829601 +0000 UTC m=+0.143803270 container start 23b785c54dccaf9f79f4f15d5251c0fd2e20d33428ea0622bd1c6ce9d3639db1 (image=quay.io/ceph/ceph:v19, name=elastic_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:11 np0005549474 podman[74812]: 2025-12-07 09:39:11.917315429 +0000 UTC m=+0.148289088 container attach 23b785c54dccaf9f79f4f15d5251c0fd2e20d33428ea0622bd1c6ce9d3639db1 (image=quay.io/ceph/ceph:v19, name=elastic_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:11 np0005549474 ceph-mgr[74811]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 04:39:11 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'balancer'
Dec  7 04:39:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:11.935+0000 7f8aa0298140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 04:39:12 np0005549474 ceph-mgr[74811]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 04:39:12 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'cephadm'
Dec  7 04:39:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:12.011+0000 7f8aa0298140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
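
Each "Module X has missing NOTIFY_TYPES member" line means that mgr plugin does not declare NOTIFY_TYPES, the class attribute through which a module states which cluster-map notifications its notify() hook consumes; the warning is harmless. A minimal sketch of a module that declares it, assuming the interfaces in Ceph's src/pybind/mgr/mgr_module.py (mgr_module is only importable inside the ceph-mgr runtime):

    # Sketch of a Ceph mgr module declaring NOTIFY_TYPES.
    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declares which notifications notify() wants; absence of this
        # attribute produces the warning seen in the log.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.debug("got %s notification", notify_type)
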
Dec  7 04:39:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  7 04:39:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2793017661' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]: 
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]: {
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "health": {
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "status": "HEALTH_OK",
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "checks": {},
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "mutes": []
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    },
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "election_epoch": 5,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "quorum": [
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        0
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    ],
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "quorum_names": [
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "compute-0"
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    ],
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "quorum_age": 2,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "monmap": {
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "epoch": 1,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "min_mon_release_name": "squid",
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "num_mons": 1
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    },
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "osdmap": {
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "epoch": 1,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "num_osds": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "num_up_osds": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "osd_up_since": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "num_in_osds": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "osd_in_since": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "num_remapped_pgs": 0
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    },
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "pgmap": {
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "pgs_by_state": [],
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "num_pgs": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "num_pools": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "num_objects": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "data_bytes": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "bytes_used": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "bytes_avail": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "bytes_total": 0
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    },
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "fsmap": {
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "epoch": 1,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "btime": "2025-12-07T09:39:07:860179+0000",
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "by_rank": [],
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "up:standby": 0
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    },
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "mgrmap": {
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "available": false,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "num_standbys": 0,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "modules": [
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:            "iostat",
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:            "nfs",
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:            "restful"
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        ],
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "services": {}
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    },
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "servicemap": {
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "epoch": 1,
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "modified": "2025-12-07T09:39:07.862134+0000",
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:        "services": {}
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    },
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]:    "progress_events": {}
Dec  7 04:39:12 np0005549474 elastic_nightingale[74849]: }
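
The JSON block above is `ceph status --format json-pretty` as run by cephadm inside the elastic_nightingale helper container (see the audit dispatch at 09:39:12): a healthy but still empty cluster, one mon in quorum, no OSDs, no active mgr yet. A sketch for performing the same check from Python, assuming the ceph CLI and admin keyring:

    import json
    import subprocess

    def cluster_status() -> dict:
        out = subprocess.run(
            ["ceph", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    s = cluster_status()
    # Mirrors the fields above: health.status, quorum_names, mgrmap.available ...
    assert s["health"]["status"] in ("HEALTH_OK", "HEALTH_WARN", "HEALTH_ERR")
    print(s["quorum_names"], s["mgrmap"]["available"])
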
Dec  7 04:39:12 np0005549474 systemd[1]: libpod-23b785c54dccaf9f79f4f15d5251c0fd2e20d33428ea0622bd1c6ce9d3639db1.scope: Deactivated successfully.
Dec  7 04:39:12 np0005549474 podman[74812]: 2025-12-07 09:39:12.138569388 +0000 UTC m=+0.369543017 container died 23b785c54dccaf9f79f4f15d5251c0fd2e20d33428ea0622bd1c6ce9d3639db1 (image=quay.io/ceph/ceph:v19, name=elastic_nightingale, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:12 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2babf0b22767a86e6557974395ea3b4df9b57d4f75960f9a37784a71b3a748d5-merged.mount: Deactivated successfully.
Dec  7 04:39:12 np0005549474 podman[74812]: 2025-12-07 09:39:12.194645302 +0000 UTC m=+0.425618971 container remove 23b785c54dccaf9f79f4f15d5251c0fd2e20d33428ea0622bd1c6ce9d3639db1 (image=quay.io/ceph/ceph:v19, name=elastic_nightingale, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:12 np0005549474 systemd[1]: libpod-conmon-23b785c54dccaf9f79f4f15d5251c0fd2e20d33428ea0622bd1c6ce9d3639db1.scope: Deactivated successfully.
Dec  7 04:39:12 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'crash'
Dec  7 04:39:12 np0005549474 ceph-mgr[74811]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:39:12 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'dashboard'
Dec  7 04:39:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:12.832+0000 7f8aa0298140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:39:13 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'devicehealth'
Dec  7 04:39:13 np0005549474 ceph-mgr[74811]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:39:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:13.442+0000 7f8aa0298140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:39:13 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 04:39:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 04:39:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 04:39:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  from numpy import show_config as show_numpy_config
Dec  7 04:39:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:13.606+0000 7f8aa0298140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:39:13 np0005549474 ceph-mgr[74811]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:39:13 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'influx'
Dec  7 04:39:13 np0005549474 ceph-mgr[74811]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:39:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:13.680+0000 7f8aa0298140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:39:13 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'insights'
Dec  7 04:39:13 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'iostat'
Dec  7 04:39:13 np0005549474 ceph-mgr[74811]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:39:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:13.809+0000 7f8aa0298140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:39:13 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'k8sevents'
Dec  7 04:39:14 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'localpool'
Dec  7 04:39:14 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 04:39:14 np0005549474 podman[74898]: 2025-12-07 09:39:14.272367161 +0000 UTC m=+0.049330398 container create a931d80d2ccc724cf7a692085788fca9aa84d5073a562a7afc890385ba35dd56 (image=quay.io/ceph/ceph:v19, name=focused_zhukovsky, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:39:14 np0005549474 systemd[1]: Started libpod-conmon-a931d80d2ccc724cf7a692085788fca9aa84d5073a562a7afc890385ba35dd56.scope.
Dec  7 04:39:14 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f930678a441c1003f3440f53a638fb2462ac83324cc2d0fbeba25affbdb9570/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f930678a441c1003f3440f53a638fb2462ac83324cc2d0fbeba25affbdb9570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f930678a441c1003f3440f53a638fb2462ac83324cc2d0fbeba25affbdb9570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:14 np0005549474 podman[74898]: 2025-12-07 09:39:14.253183053 +0000 UTC m=+0.030146320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:14 np0005549474 podman[74898]: 2025-12-07 09:39:14.360851954 +0000 UTC m=+0.137815181 container init a931d80d2ccc724cf7a692085788fca9aa84d5073a562a7afc890385ba35dd56 (image=quay.io/ceph/ceph:v19, name=focused_zhukovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 04:39:14 np0005549474 podman[74898]: 2025-12-07 09:39:14.365370243 +0000 UTC m=+0.142333460 container start a931d80d2ccc724cf7a692085788fca9aa84d5073a562a7afc890385ba35dd56 (image=quay.io/ceph/ceph:v19, name=focused_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:14 np0005549474 podman[74898]: 2025-12-07 09:39:14.368711702 +0000 UTC m=+0.145674949 container attach a931d80d2ccc724cf7a692085788fca9aa84d5073a562a7afc890385ba35dd56 (image=quay.io/ceph/ceph:v19, name=focused_zhukovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:14 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mirroring'
Dec  7 04:39:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  7 04:39:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2140175434' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]: 
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]: {
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "health": {
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "status": "HEALTH_OK",
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "checks": {},
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "mutes": []
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    },
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "election_epoch": 5,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "quorum": [
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        0
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    ],
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "quorum_names": [
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "compute-0"
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    ],
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "quorum_age": 4,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "monmap": {
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "epoch": 1,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "min_mon_release_name": "squid",
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "num_mons": 1
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    },
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "osdmap": {
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "epoch": 1,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "num_osds": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "num_up_osds": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "osd_up_since": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "num_in_osds": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "osd_in_since": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "num_remapped_pgs": 0
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    },
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "pgmap": {
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "pgs_by_state": [],
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "num_pgs": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "num_pools": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "num_objects": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "data_bytes": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "bytes_used": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "bytes_avail": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "bytes_total": 0
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    },
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "fsmap": {
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "epoch": 1,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "btime": "2025-12-07T09:39:07:860179+0000",
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "by_rank": [],
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "up:standby": 0
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    },
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "mgrmap": {
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "available": false,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "num_standbys": 0,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "modules": [
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:            "iostat",
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:            "nfs",
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:            "restful"
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        ],
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "services": {}
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    },
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "servicemap": {
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "epoch": 1,
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "modified": "2025-12-07T09:39:07.862134+0000",
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:        "services": {}
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    },
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]:    "progress_events": {}
Dec  7 04:39:14 np0005549474 focused_zhukovsky[74915]: }
Dec  7 04:39:14 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'nfs'
Dec  7 04:39:14 np0005549474 systemd[1]: libpod-a931d80d2ccc724cf7a692085788fca9aa84d5073a562a7afc890385ba35dd56.scope: Deactivated successfully.
Dec  7 04:39:14 np0005549474 podman[74941]: 2025-12-07 09:39:14.588823851 +0000 UTC m=+0.023368280 container died a931d80d2ccc724cf7a692085788fca9aa84d5073a562a7afc890385ba35dd56 (image=quay.io/ceph/ceph:v19, name=focused_zhukovsky, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 04:39:14 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6f930678a441c1003f3440f53a638fb2462ac83324cc2d0fbeba25affbdb9570-merged.mount: Deactivated successfully.
Dec  7 04:39:14 np0005549474 podman[74941]: 2025-12-07 09:39:14.624089905 +0000 UTC m=+0.058634314 container remove a931d80d2ccc724cf7a692085788fca9aa84d5073a562a7afc890385ba35dd56 (image=quay.io/ceph/ceph:v19, name=focused_zhukovsky, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:14 np0005549474 systemd[1]: libpod-conmon-a931d80d2ccc724cf7a692085788fca9aa84d5073a562a7afc890385ba35dd56.scope: Deactivated successfully.
Dec  7 04:39:14 np0005549474 ceph-mgr[74811]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:39:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:14.792+0000 7f8aa0298140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:39:14 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'orchestrator'
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:15.009+0000 7f8aa0298140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 04:39:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:15.081+0000 7f8aa0298140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_support'
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:15.150+0000 7f8aa0298140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:15.230+0000 7f8aa0298140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'progress'
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:15.300+0000 7f8aa0298140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'prometheus'
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:15.631+0000 7f8aa0298140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rbd_support'
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:15.725+0000 7f8aa0298140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'restful'
Dec  7 04:39:15 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rgw'
Dec  7 04:39:16 np0005549474 ceph-mgr[74811]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:39:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:16.144+0000 7f8aa0298140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:39:16 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rook'
Dec  7 04:39:16 np0005549474 ceph-mgr[74811]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:39:16 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'selftest'
Dec  7 04:39:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:16.693+0000 7f8aa0298140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:39:16 np0005549474 podman[74956]: 2025-12-07 09:39:16.708180041 +0000 UTC m=+0.055281895 container create 335a60f844d206475d013eab4cf7b49c319ea868a19b0aba7ed156779f277f2f (image=quay.io/ceph/ceph:v19, name=blissful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:16 np0005549474 systemd[1]: Started libpod-conmon-335a60f844d206475d013eab4cf7b49c319ea868a19b0aba7ed156779f277f2f.scope.
Dec  7 04:39:16 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b710112064fdfb159a354d2dd38dc25d0bbd6f1651b4a9824d541b077e679ae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b710112064fdfb159a354d2dd38dc25d0bbd6f1651b4a9824d541b077e679ae3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b710112064fdfb159a354d2dd38dc25d0bbd6f1651b4a9824d541b077e679ae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
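
The kernel prints this for each bind-mounted path because the backing XFS filesystem was created without bigtime support, so its inode timestamps are 32-bit signed seconds capped at 0x7fffffff. That cap is the classic Y2038 limit, which a two-line check confirms:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the limit the kernel
    # message refers to for this (non-bigtime) XFS filesystem.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
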
Dec  7 04:39:16 np0005549474 podman[74956]: 2025-12-07 09:39:16.761671208 +0000 UTC m=+0.108773072 container init 335a60f844d206475d013eab4cf7b49c319ea868a19b0aba7ed156779f277f2f (image=quay.io/ceph/ceph:v19, name=blissful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:39:16 np0005549474 ceph-mgr[74811]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:39:16 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'snap_schedule'
Dec  7 04:39:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:16.763+0000 7f8aa0298140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:39:16 np0005549474 podman[74956]: 2025-12-07 09:39:16.766082394 +0000 UTC m=+0.113184248 container start 335a60f844d206475d013eab4cf7b49c319ea868a19b0aba7ed156779f277f2f (image=quay.io/ceph/ceph:v19, name=blissful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 04:39:16 np0005549474 podman[74956]: 2025-12-07 09:39:16.674536681 +0000 UTC m=+0.021638635 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:16 np0005549474 podman[74956]: 2025-12-07 09:39:16.76856174 +0000 UTC m=+0.115663594 container attach 335a60f844d206475d013eab4cf7b49c319ea868a19b0aba7ed156779f277f2f (image=quay.io/ceph/ceph:v19, name=blissful_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:16 np0005549474 ceph-mgr[74811]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:39:16 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'stats'
Dec  7 04:39:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:16.843+0000 7f8aa0298140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:39:16 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'status'
Dec  7 04:39:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  7 04:39:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427166058' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]: 
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]: {
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "health": {
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "status": "HEALTH_OK",
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "checks": {},
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "mutes": []
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    },
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "election_epoch": 5,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "quorum": [
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        0
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    ],
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "quorum_names": [
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "compute-0"
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    ],
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "quorum_age": 6,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "monmap": {
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "epoch": 1,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "min_mon_release_name": "squid",
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "num_mons": 1
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    },
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "osdmap": {
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "epoch": 1,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "num_osds": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "num_up_osds": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "osd_up_since": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "num_in_osds": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "osd_in_since": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "num_remapped_pgs": 0
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    },
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "pgmap": {
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "pgs_by_state": [],
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "num_pgs": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "num_pools": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "num_objects": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "data_bytes": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "bytes_used": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "bytes_avail": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "bytes_total": 0
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    },
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "fsmap": {
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "epoch": 1,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "btime": "2025-12-07T09:39:07:860179+0000",
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "by_rank": [],
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "up:standby": 0
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    },
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "mgrmap": {
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "available": false,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "num_standbys": 0,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "modules": [
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:            "iostat",
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:            "nfs",
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:            "restful"
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        ],
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "services": {}
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    },
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "servicemap": {
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "epoch": 1,
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "modified": "2025-12-07T09:39:07.862134+0000",
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:        "services": {}
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    },
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]:    "progress_events": {}
Dec  7 04:39:16 np0005549474 blissful_engelbart[74973]: }
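
The container's stdout, reassembled from the journal lines above, is ordinary `ceph status --format json-pretty` output. A sketch of checking the fields the bootstrap cares about at this point, using a trimmed-down copy of the document rather than the full dump:

    import json

    # Abbreviated version of the status JSON logged above.
    captured = '''{"health": {"status": "HEALTH_OK"},
                   "mgrmap": {"available": false},
                   "osdmap": {"num_osds": 0}}'''
    status = json.loads(captured)
    if status["health"]["status"] != "HEALTH_OK":
        raise SystemExit("cluster unhealthy")
    # At this point in the boot the mgr is still loading python modules,
    # so the mgrmap is not yet available and no OSDs exist.
    print(status["mgrmap"]["available"])  # False
    print(status["osdmap"]["num_osds"])   # 0
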
Dec  7 04:39:16 np0005549474 systemd[1]: libpod-335a60f844d206475d013eab4cf7b49c319ea868a19b0aba7ed156779f277f2f.scope: Deactivated successfully.
Dec  7 04:39:16 np0005549474 podman[74956]: 2025-12-07 09:39:16.959172647 +0000 UTC m=+0.306274531 container died 335a60f844d206475d013eab4cf7b49c319ea868a19b0aba7ed156779f277f2f (image=quay.io/ceph/ceph:v19, name=blissful_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 04:39:16 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b710112064fdfb159a354d2dd38dc25d0bbd6f1651b4a9824d541b077e679ae3-merged.mount: Deactivated successfully.
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telegraf'
Dec  7 04:39:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:17.007+0000 7f8aa0298140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:39:17 np0005549474 podman[74956]: 2025-12-07 09:39:17.011362349 +0000 UTC m=+0.358464243 container remove 335a60f844d206475d013eab4cf7b49c319ea868a19b0aba7ed156779f277f2f (image=quay.io/ceph/ceph:v19, name=blissful_engelbart, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 04:39:17 np0005549474 systemd[1]: libpod-conmon-335a60f844d206475d013eab4cf7b49c319ea868a19b0aba7ed156779f277f2f.scope: Deactivated successfully.
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telemetry'
Dec  7 04:39:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:17.077+0000 7f8aa0298140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 04:39:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:17.232+0000 7f8aa0298140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'volumes'
Dec  7 04:39:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:17.459+0000 7f8aa0298140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'zabbix'
Dec  7 04:39:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:17.741+0000 7f8aa0298140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 04:39:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:17.815+0000 7f8aa0298140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
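
The long run of "has missing NOTIFY_TYPES member" warnings above comes from the module loader: modules that do not declare which cluster-map notifications they consume still load, but the mgr notes the omission for each one. A sketch of the attribute being warned about, assuming the squid-era MgrModule interface (this is illustrative, not a drop-in ceph-mgr module):

    # Only importable inside ceph-mgr's embedded Python environment.
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Declaring the notification types the module wants is what
        # silences the "missing NOTIFY_TYPES member" warning.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            # Called by the mgr for each declared notification type.
            if notify_type == NotifyType.osd_map:
                self.log.info("osd map changed")
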
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: ms_deliver_dispatch: unhandled message 0x55b5498b09c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dotugk
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map Activating!
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map I am now activating
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.dotugk(active, starting, since 0.0252584s)
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e1 all = 1
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"} v 0)
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"}]: dispatch
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: balancer
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [balancer INFO root] Starting
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Manager daemon compute-0.dotugk is now available
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: crash
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:39:17
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [balancer INFO root] No pools available
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: devicehealth
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Starting
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: iostat
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: nfs
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: orchestrator
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: pg_autoscaler
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: progress
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [progress INFO root] Loading...
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [progress INFO root] No stored events to load
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [progress INFO root] Loaded [] historic events
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [progress INFO root] Loaded OSDMap, ready.
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] recovery thread starting
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] starting setup
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: rbd_support
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: restful
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: status
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [restful INFO root] server_addr: :: server_port: 8003
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: telemetry
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [restful WARNING root] server not running: no certificate configured
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"} v 0)
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] PerfHandler: starting
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TaskHandler: starting
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"} v 0)
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] setup complete
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec  7 04:39:17 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: volumes
Dec  7 04:39:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:18 np0005549474 ceph-mon[74516]: Activating manager daemon compute-0.dotugk
Dec  7 04:39:18 np0005549474 ceph-mon[74516]: Manager daemon compute-0.dotugk is now available
Dec  7 04:39:18 np0005549474 ceph-mon[74516]: from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec  7 04:39:18 np0005549474 ceph-mon[74516]: from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:18 np0005549474 ceph-mon[74516]: from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec  7 04:39:18 np0005549474 ceph-mon[74516]: from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:18 np0005549474 ceph-mon[74516]: from='mgr.14102 192.168.122.100:0/4283256644' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:18 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.dotugk(active, since 1.05891s)
Dec  7 04:39:19 np0005549474 podman[75095]: 2025-12-07 09:39:19.085084361 +0000 UTC m=+0.043936094 container create 6ffb97fc9b7b286667a75ea42759ee47873d0d61e19ccb369cc9b81b102e6342 (image=quay.io/ceph/ceph:v19, name=great_hodgkin, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 04:39:19 np0005549474 systemd[1]: Started libpod-conmon-6ffb97fc9b7b286667a75ea42759ee47873d0d61e19ccb369cc9b81b102e6342.scope.
Dec  7 04:39:19 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:19 np0005549474 podman[75095]: 2025-12-07 09:39:19.065682858 +0000 UTC m=+0.024534691 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e08b57df68db441157915693deef971b9f535d5045406af1c2485295454d6dfa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e08b57df68db441157915693deef971b9f535d5045406af1c2485295454d6dfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e08b57df68db441157915693deef971b9f535d5045406af1c2485295454d6dfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:19 np0005549474 podman[75095]: 2025-12-07 09:39:19.184543504 +0000 UTC m=+0.143395257 container init 6ffb97fc9b7b286667a75ea42759ee47873d0d61e19ccb369cc9b81b102e6342 (image=quay.io/ceph/ceph:v19, name=great_hodgkin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:19 np0005549474 podman[75095]: 2025-12-07 09:39:19.200844536 +0000 UTC m=+0.159696319 container start 6ffb97fc9b7b286667a75ea42759ee47873d0d61e19ccb369cc9b81b102e6342 (image=quay.io/ceph/ceph:v19, name=great_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:39:19 np0005549474 podman[75095]: 2025-12-07 09:39:19.205652014 +0000 UTC m=+0.164503807 container attach 6ffb97fc9b7b286667a75ea42759ee47873d0d61e19ccb369cc9b81b102e6342 (image=quay.io/ceph/ceph:v19, name=great_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  7 04:39:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876866283' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]: 
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]: {
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "health": {
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "status": "HEALTH_OK",
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "checks": {},
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "mutes": []
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    },
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "election_epoch": 5,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "quorum": [
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        0
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    ],
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "quorum_names": [
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "compute-0"
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    ],
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "quorum_age": 9,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "monmap": {
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "epoch": 1,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "min_mon_release_name": "squid",
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "num_mons": 1
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    },
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "osdmap": {
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "epoch": 1,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "num_osds": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "num_up_osds": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "osd_up_since": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "num_in_osds": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "osd_in_since": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "num_remapped_pgs": 0
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    },
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "pgmap": {
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "pgs_by_state": [],
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "num_pgs": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "num_pools": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "num_objects": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "data_bytes": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "bytes_used": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "bytes_avail": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "bytes_total": 0
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    },
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "fsmap": {
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "epoch": 1,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "btime": "2025-12-07T09:39:07:860179+0000",
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "by_rank": [],
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "up:standby": 0
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    },
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "mgrmap": {
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "available": true,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "num_standbys": 0,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "modules": [
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:            "iostat",
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:            "nfs",
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:            "restful"
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        ],
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "services": {}
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    },
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "servicemap": {
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "epoch": 1,
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "modified": "2025-12-07T09:39:07.862134+0000",
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:        "services": {}
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    },
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]:    "progress_events": {}
Dec  7 04:39:19 np0005549474 great_hodgkin[75112]: }
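
Between the 04:39:16 dump and this one, "mgrmap.available" has flipped from false to true: the mgr finished loading modules and was activated at 04:39:17. The bootstrap is effectively polling status until that happens; a hedged sketch of such a wait loop, where run_status is assumed to be any callable returning `ceph status --format json` output as a string (for example the container helper sketched earlier):

    import json
    import time

    def wait_for_active_mgr(run_status, timeout=60.0, interval=1.0) -> bool:
        # Poll cluster status until the mgrmap reports an active daemon,
        # as the 04:39:16 -> 04:39:19 dumps above illustrate.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if json.loads(run_status())["mgrmap"]["available"]:
                return True
            time.sleep(interval)
        return False
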
Dec  7 04:39:19 np0005549474 systemd[1]: libpod-6ffb97fc9b7b286667a75ea42759ee47873d0d61e19ccb369cc9b81b102e6342.scope: Deactivated successfully.
Dec  7 04:39:19 np0005549474 podman[75095]: 2025-12-07 09:39:19.633279858 +0000 UTC m=+0.592131631 container died 6ffb97fc9b7b286667a75ea42759ee47873d0d61e19ccb369cc9b81b102e6342 (image=quay.io/ceph/ceph:v19, name=great_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:19 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e08b57df68db441157915693deef971b9f535d5045406af1c2485295454d6dfa-merged.mount: Deactivated successfully.
Dec  7 04:39:19 np0005549474 podman[75095]: 2025-12-07 09:39:19.681536465 +0000 UTC m=+0.640388248 container remove 6ffb97fc9b7b286667a75ea42759ee47873d0d61e19ccb369cc9b81b102e6342 (image=quay.io/ceph/ceph:v19, name=great_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 04:39:19 np0005549474 systemd[1]: libpod-conmon-6ffb97fc9b7b286667a75ea42759ee47873d0d61e19ccb369cc9b81b102e6342.scope: Deactivated successfully.
Dec  7 04:39:19 np0005549474 podman[75153]: 2025-12-07 09:39:19.75383591 +0000 UTC m=+0.048037993 container create ce8bd61f7108a1148f33122eb03de1a928ec0224865f1d1a86031dc43997a52b (image=quay.io/ceph/ceph:v19, name=modest_yalow, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Dec  7 04:39:19 np0005549474 systemd[1]: Started libpod-conmon-ce8bd61f7108a1148f33122eb03de1a928ec0224865f1d1a86031dc43997a52b.scope.
Dec  7 04:39:19 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e930d5582cb894c89891d8de08fdea7a5aede4c0cce09dcaf7478c03123f6522/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e930d5582cb894c89891d8de08fdea7a5aede4c0cce09dcaf7478c03123f6522/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e930d5582cb894c89891d8de08fdea7a5aede4c0cce09dcaf7478c03123f6522/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e930d5582cb894c89891d8de08fdea7a5aede4c0cce09dcaf7478c03123f6522/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:19 np0005549474 podman[75153]: 2025-12-07 09:39:19.72931026 +0000 UTC m=+0.023512383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:19 np0005549474 podman[75153]: 2025-12-07 09:39:19.831108626 +0000 UTC m=+0.125310739 container init ce8bd61f7108a1148f33122eb03de1a928ec0224865f1d1a86031dc43997a52b (image=quay.io/ceph/ceph:v19, name=modest_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 04:39:19 np0005549474 podman[75153]: 2025-12-07 09:39:19.839829446 +0000 UTC m=+0.134031559 container start ce8bd61f7108a1148f33122eb03de1a928ec0224865f1d1a86031dc43997a52b (image=quay.io/ceph/ceph:v19, name=modest_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:19 np0005549474 podman[75153]: 2025-12-07 09:39:19.84371411 +0000 UTC m=+0.137916193 container attach ce8bd61f7108a1148f33122eb03de1a928ec0224865f1d1a86031dc43997a52b (image=quay.io/ceph/ceph:v19, name=modest_yalow, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:19 np0005549474 ceph-mgr[74811]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 04:39:19 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.dotugk(active, since 2s)
Dec  7 04:39:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  7 04:39:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2453693767' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 04:39:20 np0005549474 modest_yalow[75170]: 
Dec  7 04:39:20 np0005549474 modest_yalow[75170]: [global]
Dec  7 04:39:20 np0005549474 modest_yalow[75170]: 	fsid = 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:39:20 np0005549474 modest_yalow[75170]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
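
This [global] block is the output of the `config assimilate-conf` command dispatched at 04:39:20: the monitors absorb what they can into the central config store and print back the minimal remainder (here just fsid and mon_host, which must stay in the on-disk file so clients can reach the monitors at all). A sketch of invoking it directly, with an illustrative path:

    import subprocess

    # Feed a local ceph.conf into the monitors' config database; stdout is
    # the residue that could not be assimilated and should be kept on disk.
    out = subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)  # expected to resemble the [global] block logged above
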
Dec  7 04:39:20 np0005549474 systemd[1]: libpod-ce8bd61f7108a1148f33122eb03de1a928ec0224865f1d1a86031dc43997a52b.scope: Deactivated successfully.
Dec  7 04:39:20 np0005549474 podman[75153]: 2025-12-07 09:39:20.257560015 +0000 UTC m=+0.551762108 container died ce8bd61f7108a1148f33122eb03de1a928ec0224865f1d1a86031dc43997a52b (image=quay.io/ceph/ceph:v19, name=modest_yalow, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  7 04:39:20 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e930d5582cb894c89891d8de08fdea7a5aede4c0cce09dcaf7478c03123f6522-merged.mount: Deactivated successfully.
Dec  7 04:39:20 np0005549474 podman[75153]: 2025-12-07 09:39:20.31018037 +0000 UTC m=+0.604382493 container remove ce8bd61f7108a1148f33122eb03de1a928ec0224865f1d1a86031dc43997a52b (image=quay.io/ceph/ceph:v19, name=modest_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Dec  7 04:39:20 np0005549474 systemd[1]: libpod-conmon-ce8bd61f7108a1148f33122eb03de1a928ec0224865f1d1a86031dc43997a52b.scope: Deactivated successfully.
Dec  7 04:39:20 np0005549474 podman[75209]: 2025-12-07 09:39:20.387562093 +0000 UTC m=+0.052703679 container create 46cf9d50420b50d12057f9a5d218d12ccb44e8744f5b778094c9178480869e41 (image=quay.io/ceph/ceph:v19, name=thirsty_visvesvaraya, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:20 np0005549474 systemd[1]: Started libpod-conmon-46cf9d50420b50d12057f9a5d218d12ccb44e8744f5b778094c9178480869e41.scope.
Dec  7 04:39:20 np0005549474 podman[75209]: 2025-12-07 09:39:20.362877008 +0000 UTC m=+0.028018594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:20 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7e3c6fac09f828c86905c8a151a0434701bf278c69fbf2ce7ba62c8522ec00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7e3c6fac09f828c86905c8a151a0434701bf278c69fbf2ce7ba62c8522ec00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7e3c6fac09f828c86905c8a151a0434701bf278c69fbf2ce7ba62c8522ec00/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:20 np0005549474 podman[75209]: 2025-12-07 09:39:20.505056159 +0000 UTC m=+0.170197735 container init 46cf9d50420b50d12057f9a5d218d12ccb44e8744f5b778094c9178480869e41 (image=quay.io/ceph/ceph:v19, name=thirsty_visvesvaraya, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:20 np0005549474 podman[75209]: 2025-12-07 09:39:20.515536177 +0000 UTC m=+0.180677733 container start 46cf9d50420b50d12057f9a5d218d12ccb44e8744f5b778094c9178480869e41 (image=quay.io/ceph/ceph:v19, name=thirsty_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 04:39:20 np0005549474 podman[75209]: 2025-12-07 09:39:20.519508152 +0000 UTC m=+0.184649738 container attach 46cf9d50420b50d12057f9a5d218d12ccb44e8744f5b778094c9178480869e41 (image=quay.io/ceph/ceph:v19, name=thirsty_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:20 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2453693767' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 04:39:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec  7 04:39:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/746929021' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 04:39:21 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/746929021' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  7 04:39:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/746929021' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  1: '-n'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  2: 'mgr.compute-0.dotugk'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  3: '-f'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  4: '--setuser'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  5: 'ceph'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  6: '--setgroup'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  7: 'ceph'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  8: '--default-log-to-file=false'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  9: '--default-log-to-journald=true'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  7 04:39:21 np0005549474 ceph-mgr[74811]: mgr respawn  exe_path /proc/self/exe
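
Enabling the cephadm module changed the set of enabled modules, so the mgr respawns: it dumps its original argv (lines 0-10 above) and re-executes itself via /proc/self/exe, replacing the process image without changing its PID or arguments. A generic Python sketch of the same re-exec trick (not ceph code; a Python script must re-exec the interpreter rather than /proc/self/exe directly):

    import os
    import sys

    def respawn() -> None:
        # Replace the current process with a fresh copy of itself, keeping
        # the original argument list - the pattern the mgr log shows when
        # it re-execs with the argv it just printed.
        os.execv(sys.executable, [sys.executable] + sys.argv)
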
Dec  7 04:39:21 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.dotugk(active, since 4s)
Dec  7 04:39:21 np0005549474 systemd[1]: libpod-46cf9d50420b50d12057f9a5d218d12ccb44e8744f5b778094c9178480869e41.scope: Deactivated successfully.
Dec  7 04:39:21 np0005549474 podman[75209]: 2025-12-07 09:39:21.945825089 +0000 UTC m=+1.610966665 container died 46cf9d50420b50d12057f9a5d218d12ccb44e8744f5b778094c9178480869e41 (image=quay.io/ceph/ceph:v19, name=thirsty_visvesvaraya, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:21 np0005549474 systemd[1]: var-lib-containers-storage-overlay-cf7e3c6fac09f828c86905c8a151a0434701bf278c69fbf2ce7ba62c8522ec00-merged.mount: Deactivated successfully.
Dec  7 04:39:21 np0005549474 podman[75209]: 2025-12-07 09:39:21.99414244 +0000 UTC m=+1.659283996 container remove 46cf9d50420b50d12057f9a5d218d12ccb44e8744f5b778094c9178480869e41 (image=quay.io/ceph/ceph:v19, name=thirsty_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:22 np0005549474 systemd[1]: libpod-conmon-46cf9d50420b50d12057f9a5d218d12ccb44e8744f5b778094c9178480869e41.scope: Deactivated successfully.
Dec  7 04:39:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ignoring --setuser ceph since I am not root
Dec  7 04:39:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ignoring --setgroup ceph since I am not root
Dec  7 04:39:22 np0005549474 ceph-mgr[74811]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 04:39:22 np0005549474 ceph-mgr[74811]: pidfile_write: ignore empty --pid-file
Dec  7 04:39:22 np0005549474 podman[75263]: 2025-12-07 09:39:22.067474956 +0000 UTC m=+0.047987755 container create c918af4ea61ed012bd27e43e9bc38ca63dbb8fa3e69b3846e56ae16f01e08102 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:39:22 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'alerts'
Dec  7 04:39:22 np0005549474 systemd[1]: Started libpod-conmon-c918af4ea61ed012bd27e43e9bc38ca63dbb8fa3e69b3846e56ae16f01e08102.scope.
Dec  7 04:39:22 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c23b1a9a5cb8b0b28bc30d7be0c2ab708fe318fa3e6d80ec4e8496f3c33fad0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c23b1a9a5cb8b0b28bc30d7be0c2ab708fe318fa3e6d80ec4e8496f3c33fad0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c23b1a9a5cb8b0b28bc30d7be0c2ab708fe318fa3e6d80ec4e8496f3c33fad0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:22 np0005549474 podman[75263]: 2025-12-07 09:39:22.047029973 +0000 UTC m=+0.027542812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:22 np0005549474 podman[75263]: 2025-12-07 09:39:22.160714668 +0000 UTC m=+0.141227547 container init c918af4ea61ed012bd27e43e9bc38ca63dbb8fa3e69b3846e56ae16f01e08102 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:22 np0005549474 ceph-mgr[74811]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 04:39:22 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'balancer'
Dec  7 04:39:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:22.164+0000 7f843aaf4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 04:39:22 np0005549474 podman[75263]: 2025-12-07 09:39:22.173745784 +0000 UTC m=+0.154258603 container start c918af4ea61ed012bd27e43e9bc38ca63dbb8fa3e69b3846e56ae16f01e08102 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:22 np0005549474 podman[75263]: 2025-12-07 09:39:22.17812188 +0000 UTC m=+0.158634709 container attach c918af4ea61ed012bd27e43e9bc38ca63dbb8fa3e69b3846e56ae16f01e08102 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 04:39:22 np0005549474 ceph-mgr[74811]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 04:39:22 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'cephadm'
Dec  7 04:39:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:22.238+0000 7f843aaf4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 04:39:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  7 04:39:22 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/658836890' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  7 04:39:22 np0005549474 jovial_rhodes[75299]: {
Dec  7 04:39:22 np0005549474 jovial_rhodes[75299]:    "epoch": 5,
Dec  7 04:39:22 np0005549474 jovial_rhodes[75299]:    "available": true,
Dec  7 04:39:22 np0005549474 jovial_rhodes[75299]:    "active_name": "compute-0.dotugk",
Dec  7 04:39:22 np0005549474 jovial_rhodes[75299]:    "num_standby": 0
Dec  7 04:39:22 np0005549474 jovial_rhodes[75299]: }
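The JSON printed by the jovial_rhodes container is the output of "ceph mgr stat", matching the "mgr stat" command audited just above. A small sketch, assuming the ceph CLI is on PATH, reading the same fields a bootstrap script would check:

    import json
    import subprocess

    # "ceph mgr stat" returns the epoch/available/active_name/num_standby
    # document shown above; "available": true means an active mgr is up.
    stat = json.loads(subprocess.run(
        ["ceph", "mgr", "stat"],
        capture_output=True, text=True, check=True).stdout)
    if stat["available"]:
        print("active mgr:", stat["active_name"])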
Dec  7 04:39:22 np0005549474 systemd[1]: libpod-c918af4ea61ed012bd27e43e9bc38ca63dbb8fa3e69b3846e56ae16f01e08102.scope: Deactivated successfully.
Dec  7 04:39:22 np0005549474 podman[75263]: 2025-12-07 09:39:22.570454815 +0000 UTC m=+0.550967644 container died c918af4ea61ed012bd27e43e9bc38ca63dbb8fa3e69b3846e56ae16f01e08102 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 04:39:22 np0005549474 systemd[1]: var-lib-containers-storage-overlay-7c23b1a9a5cb8b0b28bc30d7be0c2ab708fe318fa3e6d80ec4e8496f3c33fad0-merged.mount: Deactivated successfully.
Dec  7 04:39:22 np0005549474 podman[75263]: 2025-12-07 09:39:22.615435677 +0000 UTC m=+0.595948466 container remove c918af4ea61ed012bd27e43e9bc38ca63dbb8fa3e69b3846e56ae16f01e08102 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:22 np0005549474 systemd[1]: libpod-conmon-c918af4ea61ed012bd27e43e9bc38ca63dbb8fa3e69b3846e56ae16f01e08102.scope: Deactivated successfully.
Dec  7 04:39:22 np0005549474 podman[75349]: 2025-12-07 09:39:22.695250475 +0000 UTC m=+0.052461133 container create 4db4e4e990620689d1f911ff04138599f97de3226d2362eea4c8f0a26bc891fb (image=quay.io/ceph/ceph:v19, name=busy_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 04:39:22 np0005549474 systemd[1]: Started libpod-conmon-4db4e4e990620689d1f911ff04138599f97de3226d2362eea4c8f0a26bc891fb.scope.
Dec  7 04:39:22 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:22 np0005549474 podman[75349]: 2025-12-07 09:39:22.679670641 +0000 UTC m=+0.036881329 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e1aadc4f5e9ce0094ea6b472d0a1b9bd9792251aaba7d70851e2d43ecb3abd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e1aadc4f5e9ce0094ea6b472d0a1b9bd9792251aaba7d70851e2d43ecb3abd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e1aadc4f5e9ce0094ea6b472d0a1b9bd9792251aaba7d70851e2d43ecb3abd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:22 np0005549474 podman[75349]: 2025-12-07 09:39:22.792148734 +0000 UTC m=+0.149359502 container init 4db4e4e990620689d1f911ff04138599f97de3226d2362eea4c8f0a26bc891fb (image=quay.io/ceph/ceph:v19, name=busy_swartz, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:22 np0005549474 podman[75349]: 2025-12-07 09:39:22.799332895 +0000 UTC m=+0.156543603 container start 4db4e4e990620689d1f911ff04138599f97de3226d2362eea4c8f0a26bc891fb (image=quay.io/ceph/ceph:v19, name=busy_swartz, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 04:39:22 np0005549474 podman[75349]: 2025-12-07 09:39:22.803714561 +0000 UTC m=+0.160925269 container attach 4db4e4e990620689d1f911ff04138599f97de3226d2362eea4c8f0a26bc891fb (image=quay.io/ceph/ceph:v19, name=busy_swartz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 04:39:22 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/746929021' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  7 04:39:22 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'crash'
Dec  7 04:39:22 np0005549474 ceph-mgr[74811]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:39:22 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'dashboard'
Dec  7 04:39:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:22.991+0000 7f843aaf4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:39:23 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'devicehealth'
Dec  7 04:39:23 np0005549474 ceph-mgr[74811]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:39:23 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 04:39:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:23.599+0000 7f843aaf4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:39:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 04:39:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 04:39:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  from numpy import show_config as show_numpy_config
Dec  7 04:39:23 np0005549474 ceph-mgr[74811]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:39:23 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'influx'
Dec  7 04:39:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:23.754+0000 7f843aaf4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:39:23 np0005549474 ceph-mgr[74811]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:39:23 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'insights'
Dec  7 04:39:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:23.823+0000 7f843aaf4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:39:23 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'iostat'
Dec  7 04:39:23 np0005549474 ceph-mgr[74811]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:39:23 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'k8sevents'
Dec  7 04:39:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:23.959+0000 7f843aaf4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:39:24 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'localpool'
Dec  7 04:39:24 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 04:39:24 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mirroring'
Dec  7 04:39:24 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'nfs'
Dec  7 04:39:24 np0005549474 ceph-mgr[74811]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:39:24 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'orchestrator'
Dec  7 04:39:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:24.899+0000 7f843aaf4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 04:39:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:25.114+0000 7f843aaf4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_support'
Dec  7 04:39:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:25.185+0000 7f843aaf4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 04:39:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:25.248+0000 7f843aaf4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'progress'
Dec  7 04:39:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:25.322+0000 7f843aaf4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'prometheus'
Dec  7 04:39:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:25.388+0000 7f843aaf4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rbd_support'
Dec  7 04:39:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:25.725+0000 7f843aaf4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:39:25 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'restful'
Dec  7 04:39:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:25.830+0000 7f843aaf4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:39:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rgw'
Dec  7 04:39:26 np0005549474 ceph-mgr[74811]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:39:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rook'
Dec  7 04:39:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:26.247+0000 7f843aaf4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:39:26 np0005549474 ceph-mgr[74811]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:39:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'selftest'
Dec  7 04:39:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:26.786+0000 7f843aaf4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:39:26 np0005549474 ceph-mgr[74811]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:39:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'snap_schedule'
Dec  7 04:39:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:26.856+0000 7f843aaf4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:39:26 np0005549474 ceph-mgr[74811]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:39:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'stats'
Dec  7 04:39:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:26.936+0000 7f843aaf4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'status'
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telegraf'
Dec  7 04:39:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:27.075+0000 7f843aaf4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telemetry'
Dec  7 04:39:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:27.141+0000 7f843aaf4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 04:39:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:27.285+0000 7f843aaf4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'volumes'
Dec  7 04:39:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:27.489+0000 7f843aaf4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'zabbix'
Dec  7 04:39:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:27.740+0000 7f843aaf4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 04:39:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:39:27.805+0000 7f843aaf4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
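Every "Module X has missing NOTIFY_TYPES member" line above is the mgr noting that a module class does not declare which notify() event types it consumes; the warnings are non-fatal, as each one is followed by the next "Loading python module" line. A sketch for confirming the end state once the load pass finishes, assuming the Squid JSON layout of "ceph mgr module ls" (with enabled_modules among its keys):

    import json
    import subprocess

    # List module state after the load pass; key names assume Squid's output.
    mods = json.loads(subprocess.run(
        ["ceph", "mgr", "module", "ls", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(sorted(mods["enabled_modules"]))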
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dotugk restarted
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dotugk
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: ms_deliver_dispatch: unhandled message 0x55a836faad00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map Activating!
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map I am now activating
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.dotugk(active, starting, since 0.0135085s)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"} v 0)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"}]: dispatch
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e1 all = 1
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: balancer
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Manager daemon compute-0.dotugk is now available
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] Starting
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:39:27
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] No pools available
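On activation the balancer immediately built an optimize plan in upmap mode with a 0.05 max-misplaced ratio, then stopped because no pools exist yet. The same state can be read back at any time; a sketch assuming the balancer module's JSON status fields ("mode", "active"):

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    # Expected at this point in the log: mode "upmap", active True.
    print(status["mode"], status["active"])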
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: Active manager daemon compute-0.dotugk restarted
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: Activating manager daemon compute-0.dotugk
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: Manager daemon compute-0.dotugk is now available
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: cephadm
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: crash
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: devicehealth
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: iostat
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Starting
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: nfs
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: orchestrator
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: pg_autoscaler
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: progress
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [progress INFO root] Loading...
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [progress INFO root] No stored events to load
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [progress INFO root] Loaded [] historic events
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [progress INFO root] Loaded OSDMap, ready.
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] recovery thread starting
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] starting setup
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: rbd_support
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: restful
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: status
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [restful INFO root] server_addr: :: server_port: 8003
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: telemetry
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [restful WARNING root] server not running: no certificate configured
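The restful module registered server_port 8003 but refuses to serve until a certificate is stored, hence "server not running: no certificate configured". The module ships a helper that generates one; a one-call sketch, assuming the ceph CLI and admin privileges:

    import subprocess

    # Stores a self-signed certificate for the restful module, after which
    # the "no certificate configured" warning should stop.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)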
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"} v 0)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] PerfHandler: starting
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TaskHandler: starting
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"} v 0)
Dec  7 04:39:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] setup complete
Dec  7 04:39:27 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: volumes
Dec  7 04:39:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Dec  7 04:39:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Dec  7 04:39:28 np0005549474 ceph-mon[74516]: Found migration_current of "None". Setting to last migration.
Dec  7 04:39:28 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:28 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:28 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec  7 04:39:28 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec  7 04:39:28 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec  7 04:39:28 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.dotugk(active, since 1.145s)
Dec  7 04:39:28 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec  7 04:39:28 np0005549474 busy_swartz[75367]: {
Dec  7 04:39:28 np0005549474 busy_swartz[75367]:    "mgrmap_epoch": 7,
Dec  7 04:39:28 np0005549474 busy_swartz[75367]:    "initialized": true
Dec  7 04:39:28 np0005549474 busy_swartz[75367]: }
Dec  7 04:39:28 np0005549474 systemd[1]: libpod-4db4e4e990620689d1f911ff04138599f97de3226d2362eea4c8f0a26bc891fb.scope: Deactivated successfully.
Dec  7 04:39:28 np0005549474 podman[75349]: 2025-12-07 09:39:28.979812077 +0000 UTC m=+6.337022745 container died 4db4e4e990620689d1f911ff04138599f97de3226d2362eea4c8f0a26bc891fb (image=quay.io/ceph/ceph:v19, name=busy_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 04:39:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:29 np0005549474 systemd[1]: var-lib-containers-storage-overlay-76e1aadc4f5e9ce0094ea6b472d0a1b9bd9792251aaba7d70851e2d43ecb3abd-merged.mount: Deactivated successfully.
Dec  7 04:39:29 np0005549474 podman[75349]: 2025-12-07 09:39:29.129867007 +0000 UTC m=+6.487077685 container remove 4db4e4e990620689d1f911ff04138599f97de3226d2362eea4c8f0a26bc891fb (image=quay.io/ceph/ceph:v19, name=busy_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 04:39:29 np0005549474 systemd[1]: libpod-conmon-4db4e4e990620689d1f911ff04138599f97de3226d2362eea4c8f0a26bc891fb.scope: Deactivated successfully.
Dec  7 04:39:29 np0005549474 podman[75517]: 2025-12-07 09:39:29.177786278 +0000 UTC m=+0.028927729 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:29 np0005549474 podman[75517]: 2025-12-07 09:39:29.584266408 +0000 UTC m=+0.435407879 container create 0f4fa30f3a9f478a5c0c49a0125bb2569555b5bad59ec40252390ae63f345a41 (image=quay.io/ceph/ceph:v19, name=cranky_cohen, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:39:29 np0005549474 systemd[1]: Started libpod-conmon-0f4fa30f3a9f478a5c0c49a0125bb2569555b5bad59ec40252390ae63f345a41.scope.
Dec  7 04:39:29 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:29 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153b38ef2e2f09e38d579e415688090721d3131b3c826bfd95fbe6ce7103ed35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:29 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153b38ef2e2f09e38d579e415688090721d3131b3c826bfd95fbe6ce7103ed35/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:29 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153b38ef2e2f09e38d579e415688090721d3131b3c826bfd95fbe6ce7103ed35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:29 np0005549474 podman[75517]: 2025-12-07 09:39:29.688514492 +0000 UTC m=+0.539655993 container init 0f4fa30f3a9f478a5c0c49a0125bb2569555b5bad59ec40252390ae63f345a41 (image=quay.io/ceph/ceph:v19, name=cranky_cohen, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 04:39:29 np0005549474 podman[75517]: 2025-12-07 09:39:29.700554391 +0000 UTC m=+0.551695862 container start 0f4fa30f3a9f478a5c0c49a0125bb2569555b5bad59ec40252390ae63f345a41 (image=quay.io/ceph/ceph:v19, name=cranky_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:29 np0005549474 podman[75517]: 2025-12-07 09:39:29.705873133 +0000 UTC m=+0.557014594 container attach 0f4fa30f3a9f478a5c0c49a0125bb2569555b5bad59ec40252390ae63f345a41 (image=quay.io/ceph/ceph:v19, name=cranky_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 04:39:29 np0005549474 ceph-mgr[74811]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 04:39:29 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:29 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019926028 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:39:30 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec  7 04:39:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.dotugk(active, since 2s)
Dec  7 04:39:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 04:39:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
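Here the orchestrator front end is pointed at the cephadm module ("orch set backend"), followed by a config dump as the mgr re-reads its options. The equivalent CLI calls, as a sketch assuming the ceph binary is available:

    import subprocess

    subprocess.run(["ceph", "orch", "set", "backend", "cephadm"], check=True)
    # "ceph orch status" should now report the cephadm backend.
    print(subprocess.run(["ceph", "orch", "status"],
                         capture_output=True, text=True, check=True).stdout)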
Dec  7 04:39:30 np0005549474 systemd[1]: libpod-0f4fa30f3a9f478a5c0c49a0125bb2569555b5bad59ec40252390ae63f345a41.scope: Deactivated successfully.
Dec  7 04:39:30 np0005549474 podman[75517]: 2025-12-07 09:39:30.119745739 +0000 UTC m=+0.970887180 container died 0f4fa30f3a9f478a5c0c49a0125bb2569555b5bad59ec40252390ae63f345a41 (image=quay.io/ceph/ceph:v19, name=cranky_cohen, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:30 np0005549474 systemd[1]: var-lib-containers-storage-overlay-153b38ef2e2f09e38d579e415688090721d3131b3c826bfd95fbe6ce7103ed35-merged.mount: Deactivated successfully.
Dec  7 04:39:30 np0005549474 podman[75517]: 2025-12-07 09:39:30.156492484 +0000 UTC m=+1.007633925 container remove 0f4fa30f3a9f478a5c0c49a0125bb2569555b5bad59ec40252390ae63f345a41 (image=quay.io/ceph/ceph:v19, name=cranky_cohen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 04:39:30 np0005549474 systemd[1]: libpod-conmon-0f4fa30f3a9f478a5c0c49a0125bb2569555b5bad59ec40252390ae63f345a41.scope: Deactivated successfully.
Dec  7 04:39:30 np0005549474 podman[75572]: 2025-12-07 09:39:30.220495951 +0000 UTC m=+0.041759038 container create 5abb72ff19c589901f3f8711f7cc17a8ada770d393c6003c4e025d2b8e6bfb63 (image=quay.io/ceph/ceph:v19, name=great_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 04:39:30 np0005549474 systemd[1]: Started libpod-conmon-5abb72ff19c589901f3f8711f7cc17a8ada770d393c6003c4e025d2b8e6bfb63.scope.
Dec  7 04:39:30 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:30 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9935b13fcf28d43482b044068fc21ac30a49af5fd4f82b0e64d509520c27288b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:30 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9935b13fcf28d43482b044068fc21ac30a49af5fd4f82b0e64d509520c27288b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:30 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9935b13fcf28d43482b044068fc21ac30a49af5fd4f82b0e64d509520c27288b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:30 np0005549474 podman[75572]: 2025-12-07 09:39:30.296852046 +0000 UTC m=+0.118115143 container init 5abb72ff19c589901f3f8711f7cc17a8ada770d393c6003c4e025d2b8e6bfb63 (image=quay.io/ceph/ceph:v19, name=great_faraday, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 04:39:30 np0005549474 podman[75572]: 2025-12-07 09:39:30.201165748 +0000 UTC m=+0.022428865 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:30 np0005549474 podman[75572]: 2025-12-07 09:39:30.301386847 +0000 UTC m=+0.122649934 container start 5abb72ff19c589901f3f8711f7cc17a8ada770d393c6003c4e025d2b8e6bfb63 (image=quay.io/ceph/ceph:v19, name=great_faraday, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:30 np0005549474 podman[75572]: 2025-12-07 09:39:30.30756555 +0000 UTC m=+0.128828657 container attach 5abb72ff19c589901f3f8711f7cc17a8ada770d393c6003c4e025d2b8e6bfb63 (image=quay.io/ceph/ceph:v19, name=great_faraday, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  7 04:39:30 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec  7 04:39:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:30 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Set ssh ssh_user
Dec  7 04:39:30 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec  7 04:39:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:39:31] ENGINE Bus STARTING
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:39:31] ENGINE Bus STARTING
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:39:31] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:39:31] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:39:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Set ssh ssh_config
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec  7 04:39:31 np0005549474 great_faraday[75588]: ssh user set to ceph-admin. sudo will be used
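The set-user step audited above switches the SSH identity cephadm uses for remote hosts; because ceph-admin is not root, the mgr notes that sudo will be used. The equivalent call, as a sketch:

    import subprocess

    # Matches the {"prefix": "cephadm set-user", "user": "ceph-admin"}
    # dispatch recorded in the audit channel above.
    subprocess.run(["ceph", "cephadm", "set-user", "ceph-admin"], check=True)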
Dec  7 04:39:31 np0005549474 systemd[1]: libpod-5abb72ff19c589901f3f8711f7cc17a8ada770d393c6003c4e025d2b8e6bfb63.scope: Deactivated successfully.
Dec  7 04:39:31 np0005549474 podman[75572]: 2025-12-07 09:39:31.21477631 +0000 UTC m=+1.036039427 container died 5abb72ff19c589901f3f8711f7cc17a8ada770d393c6003c4e025d2b8e6bfb63 (image=quay.io/ceph/ceph:v19, name=great_faraday, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:39:31] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:39:31] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:39:31] ENGINE Bus STARTED
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:39:31] ENGINE Bus STARTED
Dec  7 04:39:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:39:31] ENGINE Client ('192.168.122.100', 33682) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:39:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:39:31] ENGINE Client ('192.168.122.100', 33682) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:39:31 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:31 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay-9935b13fcf28d43482b044068fc21ac30a49af5fd4f82b0e64d509520c27288b-merged.mount: Deactivated successfully.
Dec  7 04:39:31 np0005549474 podman[75572]: 2025-12-07 09:39:31.612685903 +0000 UTC m=+1.433949030 container remove 5abb72ff19c589901f3f8711f7cc17a8ada770d393c6003c4e025d2b8e6bfb63 (image=quay.io/ceph/ceph:v19, name=great_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:31 np0005549474 podman[75649]: 2025-12-07 09:39:31.680329887 +0000 UTC m=+0.049206305 container create db274706d557a12132bd21091b8466e46e316cb7fc732c33f380eaf467b728ab (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 04:39:31 np0005549474 systemd[1]: Started libpod-conmon-db274706d557a12132bd21091b8466e46e316cb7fc732c33f380eaf467b728ab.scope.
Dec  7 04:39:31 np0005549474 podman[75649]: 2025-12-07 09:39:31.656107485 +0000 UTC m=+0.024983883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:31 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395167cd5d6b4600eb54e10cb6b02830107ce536dbe34075c0652c273d8f35a4/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395167cd5d6b4600eb54e10cb6b02830107ce536dbe34075c0652c273d8f35a4/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395167cd5d6b4600eb54e10cb6b02830107ce536dbe34075c0652c273d8f35a4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395167cd5d6b4600eb54e10cb6b02830107ce536dbe34075c0652c273d8f35a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395167cd5d6b4600eb54e10cb6b02830107ce536dbe34075c0652c273d8f35a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:31 np0005549474 ceph-mgr[74811]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 04:39:32 np0005549474 podman[75649]: 2025-12-07 09:39:32.163920973 +0000 UTC m=+0.532797361 container init db274706d557a12132bd21091b8466e46e316cb7fc732c33f380eaf467b728ab (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 04:39:32 np0005549474 podman[75649]: 2025-12-07 09:39:32.173497706 +0000 UTC m=+0.542374084 container start db274706d557a12132bd21091b8466e46e316cb7fc732c33f380eaf467b728ab (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:32 np0005549474 podman[75649]: 2025-12-07 09:39:32.176922177 +0000 UTC m=+0.545798555 container attach db274706d557a12132bd21091b8466e46e316cb7fc732c33f380eaf467b728ab (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:39:32 np0005549474 systemd[1]: libpod-conmon-5abb72ff19c589901f3f8711f7cc17a8ada770d393c6003c4e025d2b8e6bfb63.scope: Deactivated successfully.
Dec  7 04:39:32 np0005549474 ceph-mon[74516]: Set ssh ssh_user
Dec  7 04:39:32 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:39:31] ENGINE Bus STARTING
Dec  7 04:39:32 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:39:31] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:39:32 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:32 np0005549474 ceph-mon[74516]: Set ssh ssh_config
Dec  7 04:39:32 np0005549474 ceph-mon[74516]: ssh user set to ceph-admin. sudo will be used
Dec  7 04:39:32 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:39:31] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:39:32 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:39:31] ENGINE Bus STARTED
Dec  7 04:39:32 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:39:31] ENGINE Client ('192.168.122.100', 33682) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:39:32 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec  7 04:39:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:32 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Set ssh ssh_identity_key
Dec  7 04:39:32 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec  7 04:39:32 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Set ssh private key
Dec  7 04:39:32 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Set ssh private key
Dec  7 04:39:32 np0005549474 systemd[1]: libpod-db274706d557a12132bd21091b8466e46e316cb7fc732c33f380eaf467b728ab.scope: Deactivated successfully.
Dec  7 04:39:32 np0005549474 podman[75691]: 2025-12-07 09:39:32.607122937 +0000 UTC m=+0.028337313 container died db274706d557a12132bd21091b8466e46e316cb7fc732c33f380eaf467b728ab (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 04:39:32 np0005549474 systemd[1]: var-lib-containers-storage-overlay-395167cd5d6b4600eb54e10cb6b02830107ce536dbe34075c0652c273d8f35a4-merged.mount: Deactivated successfully.
Dec  7 04:39:32 np0005549474 podman[75691]: 2025-12-07 09:39:32.672535381 +0000 UTC m=+0.093749757 container remove db274706d557a12132bd21091b8466e46e316cb7fc732c33f380eaf467b728ab (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:32 np0005549474 systemd[1]: libpod-conmon-db274706d557a12132bd21091b8466e46e316cb7fc732c33f380eaf467b728ab.scope: Deactivated successfully.
Dec  7 04:39:32 np0005549474 podman[75706]: 2025-12-07 09:39:32.767372756 +0000 UTC m=+0.061461361 container create 105962ae8379bc736c910131c231e8e2111781859e7f8d3430df2c103e8f5b49 (image=quay.io/ceph/ceph:v19, name=awesome_faraday, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 04:39:32 np0005549474 systemd[1]: Started libpod-conmon-105962ae8379bc736c910131c231e8e2111781859e7f8d3430df2c103e8f5b49.scope.
Dec  7 04:39:32 np0005549474 podman[75706]: 2025-12-07 09:39:32.733978431 +0000 UTC m=+0.028067106 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:32 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787c31a12dbd749a160cdb844f79d76bd2a7adf9c9e46a7ce04b08b922c1b1fc/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787c31a12dbd749a160cdb844f79d76bd2a7adf9c9e46a7ce04b08b922c1b1fc/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787c31a12dbd749a160cdb844f79d76bd2a7adf9c9e46a7ce04b08b922c1b1fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787c31a12dbd749a160cdb844f79d76bd2a7adf9c9e46a7ce04b08b922c1b1fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787c31a12dbd749a160cdb844f79d76bd2a7adf9c9e46a7ce04b08b922c1b1fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:32 np0005549474 podman[75706]: 2025-12-07 09:39:32.846460484 +0000 UTC m=+0.140549069 container init 105962ae8379bc736c910131c231e8e2111781859e7f8d3430df2c103e8f5b49 (image=quay.io/ceph/ceph:v19, name=awesome_faraday, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 04:39:32 np0005549474 podman[75706]: 2025-12-07 09:39:32.859912491 +0000 UTC m=+0.154001046 container start 105962ae8379bc736c910131c231e8e2111781859e7f8d3430df2c103e8f5b49 (image=quay.io/ceph/ceph:v19, name=awesome_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:32 np0005549474 podman[75706]: 2025-12-07 09:39:32.863868196 +0000 UTC m=+0.157956801 container attach 105962ae8379bc736c910131c231e8e2111781859e7f8d3430df2c103e8f5b49 (image=quay.io/ceph/ceph:v19, name=awesome_faraday, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:33 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec  7 04:39:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:33 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec  7 04:39:33 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec  7 04:39:33 np0005549474 systemd[1]: libpod-105962ae8379bc736c910131c231e8e2111781859e7f8d3430df2c103e8f5b49.scope: Deactivated successfully.
Dec  7 04:39:33 np0005549474 podman[75706]: 2025-12-07 09:39:33.220150245 +0000 UTC m=+0.514238880 container died 105962ae8379bc736c910131c231e8e2111781859e7f8d3430df2c103e8f5b49 (image=quay.io/ceph/ceph:v19, name=awesome_faraday, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:39:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-787c31a12dbd749a160cdb844f79d76bd2a7adf9c9e46a7ce04b08b922c1b1fc-merged.mount: Deactivated successfully.
Dec  7 04:39:33 np0005549474 podman[75706]: 2025-12-07 09:39:33.26635721 +0000 UTC m=+0.560445815 container remove 105962ae8379bc736c910131c231e8e2111781859e7f8d3430df2c103e8f5b49 (image=quay.io/ceph/ceph:v19, name=awesome_faraday, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:33 np0005549474 systemd[1]: libpod-conmon-105962ae8379bc736c910131c231e8e2111781859e7f8d3430df2c103e8f5b49.scope: Deactivated successfully.
Dec  7 04:39:33 np0005549474 podman[75761]: 2025-12-07 09:39:33.370593574 +0000 UTC m=+0.068041675 container create 18bc472d3a362e3110d63d95caf12059e923b31b06287fb0c4a99016370d2e04 (image=quay.io/ceph/ceph:v19, name=sweet_diffie, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:33 np0005549474 systemd[1]: Started libpod-conmon-18bc472d3a362e3110d63d95caf12059e923b31b06287fb0c4a99016370d2e04.scope.
Dec  7 04:39:33 np0005549474 podman[75761]: 2025-12-07 09:39:33.341878823 +0000 UTC m=+0.039326984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca9b519b193405a2688f18f2ca3ab4e994c115fc955fc320719599ab580b96/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca9b519b193405a2688f18f2ca3ab4e994c115fc955fc320719599ab580b96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca9b519b193405a2688f18f2ca3ab4e994c115fc955fc320719599ab580b96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:33 np0005549474 podman[75761]: 2025-12-07 09:39:33.473127534 +0000 UTC m=+0.170575705 container init 18bc472d3a362e3110d63d95caf12059e923b31b06287fb0c4a99016370d2e04 (image=quay.io/ceph/ceph:v19, name=sweet_diffie, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:33 np0005549474 podman[75761]: 2025-12-07 09:39:33.484569567 +0000 UTC m=+0.182017638 container start 18bc472d3a362e3110d63d95caf12059e923b31b06287fb0c4a99016370d2e04 (image=quay.io/ceph/ceph:v19, name=sweet_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 04:39:33 np0005549474 podman[75761]: 2025-12-07 09:39:33.489164089 +0000 UTC m=+0.186612200 container attach 18bc472d3a362e3110d63d95caf12059e923b31b06287fb0c4a99016370d2e04 (image=quay.io/ceph/ceph:v19, name=sweet_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:33 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:33 np0005549474 ceph-mon[74516]: Set ssh ssh_identity_key
Dec  7 04:39:33 np0005549474 ceph-mon[74516]: Set ssh private key
Dec  7 04:39:33 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:33 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:33 np0005549474 sweet_diffie[75777]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCV7qawPaiWuVcO40kNtmhkCIS0GCEv+SlMAAQmdSx6LBP+FPBGmyqGuWp3L82ZqYM4ceb5Mjo45bt7x0p+5ChZ62YyC0/Z0mnqTzI9g/p33jfnjcL9mF9Y7qdk2dzorfvxfZhhiw44ImuwVl3Uns3MD/M7GhMyIZ2LXcML/G73rVJL9BBCk6XECnEQwlyK1xjFqiBjNxQRpShhIhJU/JPdE96NHGCRjsSDwOtIhdFB6/A/ZVSzxMVzwztgcFS58Jj65+R42JrCmc7JjfvAsH+2DYMJbct2tgCMyzvppBBLeWYAlnEFU7hcQn4Hh21DzXRXh5F4sLF6OHOxcYAM43X9WcZ/4OpVHTALWxqX8rpQkD8txBYp+yGdzL0p8d88H0NPO31uXd2mcMhZVoBt7GTZ6Sh4yLGE288b8aQL0yQUOidSPqRX5t3ImV3NMeO/EVtl8lHYDMbPTB4Dw+qhbICUZiiF8vW7twtjqGfdvCYVj1ioJTfSpSSjaoKrLkGCfOU= zuul@controller
Dec  7 04:39:33 np0005549474 systemd[1]: libpod-18bc472d3a362e3110d63d95caf12059e923b31b06287fb0c4a99016370d2e04.scope: Deactivated successfully.
Dec  7 04:39:33 np0005549474 podman[75761]: 2025-12-07 09:39:33.817170708 +0000 UTC m=+0.514618819 container died 18bc472d3a362e3110d63d95caf12059e923b31b06287fb0c4a99016370d2e04 (image=quay.io/ceph/ceph:v19, name=sweet_diffie, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:33 np0005549474 ceph-mgr[74811]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 04:39:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-99ca9b519b193405a2688f18f2ca3ab4e994c115fc955fc320719599ab580b96-merged.mount: Deactivated successfully.
Dec  7 04:39:33 np0005549474 podman[75761]: 2025-12-07 09:39:33.869433395 +0000 UTC m=+0.566881506 container remove 18bc472d3a362e3110d63d95caf12059e923b31b06287fb0c4a99016370d2e04 (image=quay.io/ceph/ceph:v19, name=sweet_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  7 04:39:33 np0005549474 systemd[1]: libpod-conmon-18bc472d3a362e3110d63d95caf12059e923b31b06287fb0c4a99016370d2e04.scope: Deactivated successfully.
Dec  7 04:39:33 np0005549474 podman[75817]: 2025-12-07 09:39:33.96314657 +0000 UTC m=+0.062484568 container create c60b25989024453056190e01e0f8de766e155d1c12ecb24147bf3675be29de4d (image=quay.io/ceph/ceph:v19, name=angry_buck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 04:39:34 np0005549474 systemd[1]: Started libpod-conmon-c60b25989024453056190e01e0f8de766e155d1c12ecb24147bf3675be29de4d.scope.
Dec  7 04:39:34 np0005549474 podman[75817]: 2025-12-07 09:39:33.941770082 +0000 UTC m=+0.041108110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9d67a5ee1f810f5e551f724bcc0a75486f85f6426f96112c15b856feebc624/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9d67a5ee1f810f5e551f724bcc0a75486f85f6426f96112c15b856feebc624/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9d67a5ee1f810f5e551f724bcc0a75486f85f6426f96112c15b856feebc624/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:34 np0005549474 podman[75817]: 2025-12-07 09:39:34.058996701 +0000 UTC m=+0.158334669 container init c60b25989024453056190e01e0f8de766e155d1c12ecb24147bf3675be29de4d (image=quay.io/ceph/ceph:v19, name=angry_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 04:39:34 np0005549474 podman[75817]: 2025-12-07 09:39:34.065335609 +0000 UTC m=+0.164673577 container start c60b25989024453056190e01e0f8de766e155d1c12ecb24147bf3675be29de4d (image=quay.io/ceph/ceph:v19, name=angry_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 04:39:34 np0005549474 podman[75817]: 2025-12-07 09:39:34.069467789 +0000 UTC m=+0.168805777 container attach c60b25989024453056190e01e0f8de766e155d1c12ecb24147bf3675be29de4d (image=quay.io/ceph/ceph:v19, name=angry_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:34 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:34 np0005549474 ceph-mon[74516]: Set ssh ssh_identity_pub
Dec  7 04:39:34 np0005549474 systemd-logind[796]: New session 21 of user ceph-admin.
Dec  7 04:39:34 np0005549474 systemd[1]: Created slice User Slice of UID 42477.
Dec  7 04:39:34 np0005549474 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  7 04:39:34 np0005549474 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  7 04:39:34 np0005549474 systemd[1]: Starting User Manager for UID 42477...
Dec  7 04:39:34 np0005549474 systemd-logind[796]: New session 23 of user ceph-admin.
Dec  7 04:39:34 np0005549474 systemd[75863]: Queued start job for default target Main User Target.
Dec  7 04:39:34 np0005549474 systemd[75863]: Created slice User Application Slice.
Dec  7 04:39:34 np0005549474 systemd[75863]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  7 04:39:34 np0005549474 systemd[75863]: Started Daily Cleanup of User's Temporary Directories.
Dec  7 04:39:34 np0005549474 systemd[75863]: Reached target Paths.
Dec  7 04:39:34 np0005549474 systemd[75863]: Reached target Timers.
Dec  7 04:39:34 np0005549474 systemd[75863]: Starting D-Bus User Message Bus Socket...
Dec  7 04:39:34 np0005549474 systemd[75863]: Starting Create User's Volatile Files and Directories...
Dec  7 04:39:34 np0005549474 systemd[75863]: Listening on D-Bus User Message Bus Socket.
Dec  7 04:39:34 np0005549474 systemd[75863]: Finished Create User's Volatile Files and Directories.
Dec  7 04:39:34 np0005549474 systemd[75863]: Reached target Sockets.
Dec  7 04:39:34 np0005549474 systemd[75863]: Reached target Basic System.
Dec  7 04:39:34 np0005549474 systemd[75863]: Reached target Main User Target.
Dec  7 04:39:34 np0005549474 systemd[75863]: Startup finished in 140ms.
Dec  7 04:39:34 np0005549474 systemd[1]: Started User Manager for UID 42477.
Dec  7 04:39:34 np0005549474 systemd[1]: Started Session 21 of User ceph-admin.
Dec  7 04:39:34 np0005549474 systemd[1]: Started Session 23 of User ceph-admin.
Dec  7 04:39:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053080 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:39:35 np0005549474 systemd-logind[796]: New session 24 of user ceph-admin.
Dec  7 04:39:35 np0005549474 systemd[1]: Started Session 24 of User ceph-admin.
Dec  7 04:39:35 np0005549474 systemd-logind[796]: New session 25 of user ceph-admin.
Dec  7 04:39:35 np0005549474 systemd[1]: Started Session 25 of User ceph-admin.
Dec  7 04:39:35 np0005549474 ceph-mgr[74811]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 04:39:35 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec  7 04:39:35 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec  7 04:39:36 np0005549474 systemd-logind[796]: New session 26 of user ceph-admin.
Dec  7 04:39:36 np0005549474 systemd[1]: Started Session 26 of User ceph-admin.
Dec  7 04:39:36 np0005549474 systemd-logind[796]: New session 27 of user ceph-admin.
Dec  7 04:39:36 np0005549474 systemd[1]: Started Session 27 of User ceph-admin.
Dec  7 04:39:36 np0005549474 ceph-mon[74516]: Deploying cephadm binary to compute-0
Dec  7 04:39:36 np0005549474 systemd-logind[796]: New session 28 of user ceph-admin.
Dec  7 04:39:36 np0005549474 systemd[1]: Started Session 28 of User ceph-admin.
Dec  7 04:39:37 np0005549474 systemd-logind[796]: New session 29 of user ceph-admin.
Dec  7 04:39:37 np0005549474 systemd[1]: Started Session 29 of User ceph-admin.
Dec  7 04:39:37 np0005549474 systemd-logind[796]: New session 30 of user ceph-admin.
Dec  7 04:39:37 np0005549474 systemd[1]: Started Session 30 of User ceph-admin.
Dec  7 04:39:37 np0005549474 ceph-mgr[74811]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 04:39:37 np0005549474 systemd-logind[796]: New session 31 of user ceph-admin.
Dec  7 04:39:37 np0005549474 systemd[1]: Started Session 31 of User ceph-admin.
Dec  7 04:39:39 np0005549474 systemd-logind[796]: New session 32 of user ceph-admin.
Dec  7 04:39:39 np0005549474 systemd[1]: Started Session 32 of User ceph-admin.
Dec  7 04:39:39 np0005549474 systemd-logind[796]: New session 33 of user ceph-admin.
Dec  7 04:39:39 np0005549474 systemd[1]: Started Session 33 of User ceph-admin.
Dec  7 04:39:39 np0005549474 ceph-mgr[74811]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 04:39:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 04:39:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:39 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Added host compute-0
Dec  7 04:39:39 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  7 04:39:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 04:39:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  7 04:39:39 np0005549474 angry_buck[75833]: Added host 'compute-0' with addr '192.168.122.100'
Dec  7 04:39:39 np0005549474 systemd[1]: libpod-c60b25989024453056190e01e0f8de766e155d1c12ecb24147bf3675be29de4d.scope: Deactivated successfully.
Dec  7 04:39:39 np0005549474 podman[76226]: 2025-12-07 09:39:39.962689083 +0000 UTC m=+0.045756365 container died c60b25989024453056190e01e0f8de766e155d1c12ecb24147bf3675be29de4d (image=quay.io/ceph/ceph:v19, name=angry_buck, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:39 np0005549474 systemd[1]: var-lib-containers-storage-overlay-fe9d67a5ee1f810f5e551f724bcc0a75486f85f6426f96112c15b856feebc624-merged.mount: Deactivated successfully.
Dec  7 04:39:40 np0005549474 podman[76226]: 2025-12-07 09:39:40.006552146 +0000 UTC m=+0.089619338 container remove c60b25989024453056190e01e0f8de766e155d1c12ecb24147bf3675be29de4d (image=quay.io/ceph/ceph:v19, name=angry_buck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:39:40 np0005549474 systemd[1]: libpod-conmon-c60b25989024453056190e01e0f8de766e155d1c12ecb24147bf3675be29de4d.scope: Deactivated successfully.
Dec  7 04:39:40 np0005549474 podman[76281]: 2025-12-07 09:39:40.118596477 +0000 UTC m=+0.069915075 container create 271ebff7c4541b028b747e4d116cc6b50ad38c2cecef78e7c46a39ae7424e12f (image=quay.io/ceph/ceph:v19, name=epic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:40 np0005549474 systemd[1]: Started libpod-conmon-271ebff7c4541b028b747e4d116cc6b50ad38c2cecef78e7c46a39ae7424e12f.scope.
Dec  7 04:39:40 np0005549474 podman[76281]: 2025-12-07 09:39:40.089024044 +0000 UTC m=+0.040342642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:40 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1170b94783e7a76c6294415773a8fbc57d91d04279c42985eea61be4813b469/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1170b94783e7a76c6294415773a8fbc57d91d04279c42985eea61be4813b469/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1170b94783e7a76c6294415773a8fbc57d91d04279c42985eea61be4813b469/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:40 np0005549474 podman[76281]: 2025-12-07 09:39:40.234556853 +0000 UTC m=+0.185875461 container init 271ebff7c4541b028b747e4d116cc6b50ad38c2cecef78e7c46a39ae7424e12f (image=quay.io/ceph/ceph:v19, name=epic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 04:39:40 np0005549474 podman[76281]: 2025-12-07 09:39:40.246418108 +0000 UTC m=+0.197736666 container start 271ebff7c4541b028b747e4d116cc6b50ad38c2cecef78e7c46a39ae7424e12f (image=quay.io/ceph/ceph:v19, name=epic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:40 np0005549474 podman[76281]: 2025-12-07 09:39:40.249720925 +0000 UTC m=+0.201039573 container attach 271ebff7c4541b028b747e4d116cc6b50ad38c2cecef78e7c46a39ae7424e12f (image=quay.io/ceph/ceph:v19, name=epic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 04:39:40 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:40 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec  7 04:39:40 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec  7 04:39:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 04:39:40 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:40 np0005549474 epic_banzai[76299]: Scheduled mon update...
Dec  7 04:39:40 np0005549474 podman[76281]: 2025-12-07 09:39:40.645707967 +0000 UTC m=+0.597026575 container died 271ebff7c4541b028b747e4d116cc6b50ad38c2cecef78e7c46a39ae7424e12f (image=quay.io/ceph/ceph:v19, name=epic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 04:39:40 np0005549474 systemd[1]: libpod-271ebff7c4541b028b747e4d116cc6b50ad38c2cecef78e7c46a39ae7424e12f.scope: Deactivated successfully.
Dec  7 04:39:40 np0005549474 systemd[1]: var-lib-containers-storage-overlay-c1170b94783e7a76c6294415773a8fbc57d91d04279c42985eea61be4813b469-merged.mount: Deactivated successfully.
Dec  7 04:39:40 np0005549474 podman[76281]: 2025-12-07 09:39:40.683324175 +0000 UTC m=+0.634642733 container remove 271ebff7c4541b028b747e4d116cc6b50ad38c2cecef78e7c46a39ae7424e12f (image=quay.io/ceph/ceph:v19, name=epic_banzai, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:39:40 np0005549474 systemd[1]: libpod-conmon-271ebff7c4541b028b747e4d116cc6b50ad38c2cecef78e7c46a39ae7424e12f.scope: Deactivated successfully.
Dec  7 04:39:40 np0005549474 podman[76362]: 2025-12-07 09:39:40.756637879 +0000 UTC m=+0.045151798 container create 2d54c2e6400481421d3855bcea44d1d8b1836ad0899e1a043422048a8c974fed (image=quay.io/ceph/ceph:v19, name=quirky_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 04:39:40 np0005549474 systemd[1]: Started libpod-conmon-2d54c2e6400481421d3855bcea44d1d8b1836ad0899e1a043422048a8c974fed.scope.
Dec  7 04:39:40 np0005549474 podman[76362]: 2025-12-07 09:39:40.736090144 +0000 UTC m=+0.024604163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:40 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49202a017d8670b789e032b15ad6d6e513952ace4039933a965da07a68d61811/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49202a017d8670b789e032b15ad6d6e513952ace4039933a965da07a68d61811/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49202a017d8670b789e032b15ad6d6e513952ace4039933a965da07a68d61811/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:40 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:40 np0005549474 ceph-mon[74516]: Added host compute-0
Dec  7 04:39:40 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:40 np0005549474 podman[76362]: 2025-12-07 09:39:40.867326905 +0000 UTC m=+0.155840854 container init 2d54c2e6400481421d3855bcea44d1d8b1836ad0899e1a043422048a8c974fed (image=quay.io/ceph/ceph:v19, name=quirky_thompson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:40 np0005549474 podman[76362]: 2025-12-07 09:39:40.875602344 +0000 UTC m=+0.164116303 container start 2d54c2e6400481421d3855bcea44d1d8b1836ad0899e1a043422048a8c974fed (image=quay.io/ceph/ceph:v19, name=quirky_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:40 np0005549474 podman[76362]: 2025-12-07 09:39:40.879553759 +0000 UTC m=+0.168067698 container attach 2d54c2e6400481421d3855bcea44d1d8b1836ad0899e1a043422048a8c974fed (image=quay.io/ceph/ceph:v19, name=quirky_thompson, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:40 np0005549474 podman[76316]: 2025-12-07 09:39:40.947566603 +0000 UTC m=+0.633288677 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:41 np0005549474 podman[76415]: 2025-12-07 09:39:41.062144561 +0000 UTC m=+0.037649369 container create c7251841b83ff46520b2ea08a3077ffa8067e2dffbc4976aa21331f348583361 (image=quay.io/ceph/ceph:v19, name=serene_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:39:41 np0005549474 systemd[1]: Started libpod-conmon-c7251841b83ff46520b2ea08a3077ffa8067e2dffbc4976aa21331f348583361.scope.
Dec  7 04:39:41 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:41 np0005549474 podman[76415]: 2025-12-07 09:39:41.12356072 +0000 UTC m=+0.099065508 container init c7251841b83ff46520b2ea08a3077ffa8067e2dffbc4976aa21331f348583361 (image=quay.io/ceph/ceph:v19, name=serene_einstein, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 04:39:41 np0005549474 podman[76415]: 2025-12-07 09:39:41.128629275 +0000 UTC m=+0.104134063 container start c7251841b83ff46520b2ea08a3077ffa8067e2dffbc4976aa21331f348583361 (image=quay.io/ceph/ceph:v19, name=serene_einstein, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:41 np0005549474 podman[76415]: 2025-12-07 09:39:41.131903432 +0000 UTC m=+0.107408240 container attach c7251841b83ff46520b2ea08a3077ffa8067e2dffbc4976aa21331f348583361 (image=quay.io/ceph/ceph:v19, name=serene_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 04:39:41 np0005549474 podman[76415]: 2025-12-07 09:39:41.044311509 +0000 UTC m=+0.019816337 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:41 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:41 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec  7 04:39:41 np0005549474 serene_einstein[76431]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 04:39:41 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:41 np0005549474 quirky_thompson[76379]: Scheduled mgr update...
Dec  7 04:39:41 np0005549474 systemd[1]: libpod-c7251841b83ff46520b2ea08a3077ffa8067e2dffbc4976aa21331f348583361.scope: Deactivated successfully.
Dec  7 04:39:41 np0005549474 podman[76415]: 2025-12-07 09:39:41.245435092 +0000 UTC m=+0.220939880 container died c7251841b83ff46520b2ea08a3077ffa8067e2dffbc4976aa21331f348583361 (image=quay.io/ceph/ceph:v19, name=serene_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:41 np0005549474 systemd[1]: libpod-2d54c2e6400481421d3855bcea44d1d8b1836ad0899e1a043422048a8c974fed.scope: Deactivated successfully.
Dec  7 04:39:41 np0005549474 podman[76362]: 2025-12-07 09:39:41.259410003 +0000 UTC m=+0.547923922 container died 2d54c2e6400481421d3855bcea44d1d8b1836ad0899e1a043422048a8c974fed (image=quay.io/ceph/ceph:v19, name=quirky_thompson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  7 04:39:41 np0005549474 systemd[1]: var-lib-containers-storage-overlay-1e3a076554c6ef181c89b580542b86153cc55ae17aa4964d97a74caf642fca25-merged.mount: Deactivated successfully.
Dec  7 04:39:41 np0005549474 systemd[1]: var-lib-containers-storage-overlay-49202a017d8670b789e032b15ad6d6e513952ace4039933a965da07a68d61811-merged.mount: Deactivated successfully.
Dec  7 04:39:41 np0005549474 podman[76415]: 2025-12-07 09:39:41.291826543 +0000 UTC m=+0.267331331 container remove c7251841b83ff46520b2ea08a3077ffa8067e2dffbc4976aa21331f348583361 (image=quay.io/ceph/ceph:v19, name=serene_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 04:39:41 np0005549474 podman[76362]: 2025-12-07 09:39:41.308090104 +0000 UTC m=+0.596604023 container remove 2d54c2e6400481421d3855bcea44d1d8b1836ad0899e1a043422048a8c974fed (image=quay.io/ceph/ceph:v19, name=quirky_thompson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:41 np0005549474 systemd[1]: libpod-conmon-2d54c2e6400481421d3855bcea44d1d8b1836ad0899e1a043422048a8c974fed.scope: Deactivated successfully.
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec  7 04:39:41 np0005549474 systemd[1]: libpod-conmon-c7251841b83ff46520b2ea08a3077ffa8067e2dffbc4976aa21331f348583361.scope: Deactivated successfully.
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:41 np0005549474 podman[76463]: 2025-12-07 09:39:41.359902108 +0000 UTC m=+0.035080791 container create 2ed79b13405b572952f7d17374ec641273c18cddbf426ae01ea4e906b9c25308 (image=quay.io/ceph/ceph:v19, name=goofy_dewdney, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:41 np0005549474 systemd[1]: Started libpod-conmon-2ed79b13405b572952f7d17374ec641273c18cddbf426ae01ea4e906b9c25308.scope.
Dec  7 04:39:41 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d33de38782afa1a1c5ec953e26c6b418c253fc618d1c744f1125da0d53b0d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d33de38782afa1a1c5ec953e26c6b418c253fc618d1c744f1125da0d53b0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d33de38782afa1a1c5ec953e26c6b418c253fc618d1c744f1125da0d53b0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:41 np0005549474 podman[76463]: 2025-12-07 09:39:41.414813795 +0000 UTC m=+0.089992498 container init 2ed79b13405b572952f7d17374ec641273c18cddbf426ae01ea4e906b9c25308 (image=quay.io/ceph/ceph:v19, name=goofy_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:41 np0005549474 podman[76463]: 2025-12-07 09:39:41.420062274 +0000 UTC m=+0.095240957 container start 2ed79b13405b572952f7d17374ec641273c18cddbf426ae01ea4e906b9c25308 (image=quay.io/ceph/ceph:v19, name=goofy_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 04:39:41 np0005549474 podman[76463]: 2025-12-07 09:39:41.42332299 +0000 UTC m=+0.098501683 container attach 2ed79b13405b572952f7d17374ec641273c18cddbf426ae01ea4e906b9c25308 (image=quay.io/ceph/ceph:v19, name=goofy_dewdney, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:41 np0005549474 podman[76463]: 2025-12-07 09:39:41.345246449 +0000 UTC m=+0.020425152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:41 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:41 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service crash spec with placement *
Dec  7 04:39:41 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:41 np0005549474 goofy_dewdney[76504]: Scheduled crash update...
Dec  7 04:39:41 np0005549474 systemd[1]: libpod-2ed79b13405b572952f7d17374ec641273c18cddbf426ae01ea4e906b9c25308.scope: Deactivated successfully.
Dec  7 04:39:41 np0005549474 podman[76463]: 2025-12-07 09:39:41.767531929 +0000 UTC m=+0.442710612 container died 2ed79b13405b572952f7d17374ec641273c18cddbf426ae01ea4e906b9c25308 (image=quay.io/ceph/ceph:v19, name=goofy_dewdney, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 04:39:41 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e17d33de38782afa1a1c5ec953e26c6b418c253fc618d1c744f1125da0d53b0d-merged.mount: Deactivated successfully.
Dec  7 04:39:41 np0005549474 podman[76463]: 2025-12-07 09:39:41.806616476 +0000 UTC m=+0.481795159 container remove 2ed79b13405b572952f7d17374ec641273c18cddbf426ae01ea4e906b9c25308 (image=quay.io/ceph/ceph:v19, name=goofy_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 04:39:41 np0005549474 systemd[1]: libpod-conmon-2ed79b13405b572952f7d17374ec641273c18cddbf426ae01ea4e906b9c25308.scope: Deactivated successfully.
Dec  7 04:39:41 np0005549474 ceph-mgr[74811]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: Saving service mon spec with placement count:5
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:41 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:41 np0005549474 podman[76633]: 2025-12-07 09:39:41.896240943 +0000 UTC m=+0.055688228 container create d9722b377533479b267df795e96c61d43810d9907b447a3f0ca58a0e03dc1846 (image=quay.io/ceph/ceph:v19, name=festive_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:41 np0005549474 systemd[1]: Started libpod-conmon-d9722b377533479b267df795e96c61d43810d9907b447a3f0ca58a0e03dc1846.scope.
Dec  7 04:39:41 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2def8e4a980110fbf748838129ccc1f731993af949ba54af20592e94e91d9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2def8e4a980110fbf748838129ccc1f731993af949ba54af20592e94e91d9a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2def8e4a980110fbf748838129ccc1f731993af949ba54af20592e94e91d9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:41 np0005549474 podman[76633]: 2025-12-07 09:39:41.96892961 +0000 UTC m=+0.128376915 container init d9722b377533479b267df795e96c61d43810d9907b447a3f0ca58a0e03dc1846 (image=quay.io/ceph/ceph:v19, name=festive_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 04:39:41 np0005549474 podman[76633]: 2025-12-07 09:39:41.877251029 +0000 UTC m=+0.036698374 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:41 np0005549474 podman[76633]: 2025-12-07 09:39:41.974337604 +0000 UTC m=+0.133784909 container start d9722b377533479b267df795e96c61d43810d9907b447a3f0ca58a0e03dc1846 (image=quay.io/ceph/ceph:v19, name=festive_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 04:39:41 np0005549474 podman[76633]: 2025-12-07 09:39:41.977664582 +0000 UTC m=+0.137111897 container attach d9722b377533479b267df795e96c61d43810d9907b447a3f0ca58a0e03dc1846 (image=quay.io/ceph/ceph:v19, name=festive_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:42 np0005549474 podman[76751]: 2025-12-07 09:39:42.292400759 +0000 UTC m=+0.045724164 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec  7 04:39:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1607322334' entity='client.admin' 
Dec  7 04:39:42 np0005549474 systemd[1]: libpod-d9722b377533479b267df795e96c61d43810d9907b447a3f0ca58a0e03dc1846.scope: Deactivated successfully.
Dec  7 04:39:42 np0005549474 podman[76633]: 2025-12-07 09:39:42.368709842 +0000 UTC m=+0.528157177 container died d9722b377533479b267df795e96c61d43810d9907b447a3f0ca58a0e03dc1846 (image=quay.io/ceph/ceph:v19, name=festive_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 04:39:42 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2b2def8e4a980110fbf748838129ccc1f731993af949ba54af20592e94e91d9a-merged.mount: Deactivated successfully.
Dec  7 04:39:42 np0005549474 podman[76751]: 2025-12-07 09:39:42.397975039 +0000 UTC m=+0.151298414 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 04:39:42 np0005549474 podman[76633]: 2025-12-07 09:39:42.407542152 +0000 UTC m=+0.566989447 container remove d9722b377533479b267df795e96c61d43810d9907b447a3f0ca58a0e03dc1846 (image=quay.io/ceph/ceph:v19, name=festive_allen, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 04:39:42 np0005549474 systemd[1]: libpod-conmon-d9722b377533479b267df795e96c61d43810d9907b447a3f0ca58a0e03dc1846.scope: Deactivated successfully.
Dec  7 04:39:42 np0005549474 podman[76796]: 2025-12-07 09:39:42.486455875 +0000 UTC m=+0.048335953 container create cdbf9cedfcd3879dc14511d331b9246679f44427c7e1cbbd450c17f3ea9046e3 (image=quay.io/ceph/ceph:v19, name=gallant_kalam, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:42 np0005549474 systemd[1]: Started libpod-conmon-cdbf9cedfcd3879dc14511d331b9246679f44427c7e1cbbd450c17f3ea9046e3.scope.
Dec  7 04:39:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:39:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:42 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:42 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd82f4c0e5fe507cbf4caf06027fd0ee0cc2b1e007ce6c588acbe0da6c722de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:42 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd82f4c0e5fe507cbf4caf06027fd0ee0cc2b1e007ce6c588acbe0da6c722de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:42 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd82f4c0e5fe507cbf4caf06027fd0ee0cc2b1e007ce6c588acbe0da6c722de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:42 np0005549474 podman[76796]: 2025-12-07 09:39:42.564868395 +0000 UTC m=+0.126748503 container init cdbf9cedfcd3879dc14511d331b9246679f44427c7e1cbbd450c17f3ea9046e3 (image=quay.io/ceph/ceph:v19, name=gallant_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  7 04:39:42 np0005549474 podman[76796]: 2025-12-07 09:39:42.47080044 +0000 UTC m=+0.032680528 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:42 np0005549474 podman[76796]: 2025-12-07 09:39:42.576435332 +0000 UTC m=+0.138315410 container start cdbf9cedfcd3879dc14511d331b9246679f44427c7e1cbbd450c17f3ea9046e3 (image=quay.io/ceph/ceph:v19, name=gallant_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:42 np0005549474 podman[76796]: 2025-12-07 09:39:42.579831582 +0000 UTC m=+0.141711700 container attach cdbf9cedfcd3879dc14511d331b9246679f44427c7e1cbbd450c17f3ea9046e3 (image=quay.io/ceph/ceph:v19, name=gallant_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 04:39:42 np0005549474 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76911 (sysctl)
Dec  7 04:39:42 np0005549474 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  7 04:39:42 np0005549474 ceph-mon[74516]: Saving service mgr spec with placement count:2
Dec  7 04:39:42 np0005549474 ceph-mon[74516]: Saving service crash spec with placement *
Dec  7 04:39:42 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1607322334' entity='client.admin' 
Dec  7 04:39:42 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:42 np0005549474 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  7 04:39:42 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec  7 04:39:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:42 np0005549474 systemd[1]: libpod-cdbf9cedfcd3879dc14511d331b9246679f44427c7e1cbbd450c17f3ea9046e3.scope: Deactivated successfully.
Dec  7 04:39:42 np0005549474 podman[76796]: 2025-12-07 09:39:42.936705846 +0000 UTC m=+0.498585934 container died cdbf9cedfcd3879dc14511d331b9246679f44427c7e1cbbd450c17f3ea9046e3 (image=quay.io/ceph/ceph:v19, name=gallant_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 04:39:42 np0005549474 systemd[1]: var-lib-containers-storage-overlay-bcd82f4c0e5fe507cbf4caf06027fd0ee0cc2b1e007ce6c588acbe0da6c722de-merged.mount: Deactivated successfully.
Dec  7 04:39:42 np0005549474 podman[76796]: 2025-12-07 09:39:42.978931186 +0000 UTC m=+0.540811264 container remove cdbf9cedfcd3879dc14511d331b9246679f44427c7e1cbbd450c17f3ea9046e3 (image=quay.io/ceph/ceph:v19, name=gallant_kalam, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:39:42 np0005549474 systemd[1]: libpod-conmon-cdbf9cedfcd3879dc14511d331b9246679f44427c7e1cbbd450c17f3ea9046e3.scope: Deactivated successfully.
Dec  7 04:39:43 np0005549474 podman[76933]: 2025-12-07 09:39:43.037017197 +0000 UTC m=+0.038974995 container create 08a358565804c65db3ccdf704b51da420220fcd2afb7aecdd089815da70d536e (image=quay.io/ceph/ceph:v19, name=serene_lichterman, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 04:39:43 np0005549474 systemd[1]: Started libpod-conmon-08a358565804c65db3ccdf704b51da420220fcd2afb7aecdd089815da70d536e.scope.
Dec  7 04:39:43 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e351e72a77889cef79f886b499cc5cb284d3b8c0b8706feac12e6a64a005bd21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e351e72a77889cef79f886b499cc5cb284d3b8c0b8706feac12e6a64a005bd21/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e351e72a77889cef79f886b499cc5cb284d3b8c0b8706feac12e6a64a005bd21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:43 np0005549474 podman[76933]: 2025-12-07 09:39:43.104674381 +0000 UTC m=+0.106632199 container init 08a358565804c65db3ccdf704b51da420220fcd2afb7aecdd089815da70d536e (image=quay.io/ceph/ceph:v19, name=serene_lichterman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 04:39:43 np0005549474 podman[76933]: 2025-12-07 09:39:43.11214813 +0000 UTC m=+0.114105928 container start 08a358565804c65db3ccdf704b51da420220fcd2afb7aecdd089815da70d536e (image=quay.io/ceph/ceph:v19, name=serene_lichterman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 04:39:43 np0005549474 podman[76933]: 2025-12-07 09:39:43.116236668 +0000 UTC m=+0.118194486 container attach 08a358565804c65db3ccdf704b51da420220fcd2afb7aecdd089815da70d536e (image=quay.io/ceph/ceph:v19, name=serene_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 04:39:43 np0005549474 podman[76933]: 2025-12-07 09:39:43.020411716 +0000 UTC m=+0.022369534 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:43 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 04:39:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:43 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Added label _admin to host compute-0
Dec  7 04:39:43 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec  7 04:39:43 np0005549474 serene_lichterman[76951]: Added label _admin to host compute-0
Dec  7 04:39:43 np0005549474 systemd[1]: libpod-08a358565804c65db3ccdf704b51da420220fcd2afb7aecdd089815da70d536e.scope: Deactivated successfully.
Dec  7 04:39:43 np0005549474 podman[76933]: 2025-12-07 09:39:43.487742291 +0000 UTC m=+0.489700089 container died 08a358565804c65db3ccdf704b51da420220fcd2afb7aecdd089815da70d536e (image=quay.io/ceph/ceph:v19, name=serene_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:43 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e351e72a77889cef79f886b499cc5cb284d3b8c0b8706feac12e6a64a005bd21-merged.mount: Deactivated successfully.
Dec  7 04:39:43 np0005549474 podman[76933]: 2025-12-07 09:39:43.520320255 +0000 UTC m=+0.522278053 container remove 08a358565804c65db3ccdf704b51da420220fcd2afb7aecdd089815da70d536e (image=quay.io/ceph/ceph:v19, name=serene_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:43 np0005549474 systemd[1]: libpod-conmon-08a358565804c65db3ccdf704b51da420220fcd2afb7aecdd089815da70d536e.scope: Deactivated successfully.
Dec  7 04:39:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:39:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:43 np0005549474 podman[77069]: 2025-12-07 09:39:43.579276379 +0000 UTC m=+0.034963739 container create ffd172bb576fbde780a07258f3dff404a480898f1168af066353c7e1c353ef8f (image=quay.io/ceph/ceph:v19, name=admiring_tharp, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec  7 04:39:43 np0005549474 systemd[1]: Started libpod-conmon-ffd172bb576fbde780a07258f3dff404a480898f1168af066353c7e1c353ef8f.scope.
Dec  7 04:39:43 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d075bf56d5666f6181b3ad14857ce9f320ef8271cf312e12bc19d773bc04905b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d075bf56d5666f6181b3ad14857ce9f320ef8271cf312e12bc19d773bc04905b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d075bf56d5666f6181b3ad14857ce9f320ef8271cf312e12bc19d773bc04905b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:43 np0005549474 podman[77069]: 2025-12-07 09:39:43.563812918 +0000 UTC m=+0.019500298 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:43 np0005549474 podman[77069]: 2025-12-07 09:39:43.672930812 +0000 UTC m=+0.128618222 container init ffd172bb576fbde780a07258f3dff404a480898f1168af066353c7e1c353ef8f (image=quay.io/ceph/ceph:v19, name=admiring_tharp, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 04:39:43 np0005549474 podman[77069]: 2025-12-07 09:39:43.678268014 +0000 UTC m=+0.133955374 container start ffd172bb576fbde780a07258f3dff404a480898f1168af066353c7e1c353ef8f (image=quay.io/ceph/ceph:v19, name=admiring_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 04:39:43 np0005549474 podman[77069]: 2025-12-07 09:39:43.68152374 +0000 UTC m=+0.137211120 container attach ffd172bb576fbde780a07258f3dff404a480898f1168af066353c7e1c353ef8f (image=quay.io/ceph/ceph:v19, name=admiring_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:43 np0005549474 ceph-mgr[74811]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 04:39:43 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:43 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:43 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:44 np0005549474 podman[77199]: 2025-12-07 09:39:44.037249344 +0000 UTC m=+0.046659699 container create 70957b0f6d0fdba674d0fc023237b738075dbf03981158fcfd58b844d1619e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kirch, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:44 np0005549474 systemd[1]: Started libpod-conmon-70957b0f6d0fdba674d0fc023237b738075dbf03981158fcfd58b844d1619e7b.scope.
Dec  7 04:39:44 np0005549474 podman[77199]: 2025-12-07 09:39:44.01145504 +0000 UTC m=+0.020865435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:39:44 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:44 np0005549474 podman[77199]: 2025-12-07 09:39:44.120064551 +0000 UTC m=+0.129474926 container init 70957b0f6d0fdba674d0fc023237b738075dbf03981158fcfd58b844d1619e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Dec  7 04:39:44 np0005549474 podman[77199]: 2025-12-07 09:39:44.125788643 +0000 UTC m=+0.135199008 container start 70957b0f6d0fdba674d0fc023237b738075dbf03981158fcfd58b844d1619e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 04:39:44 np0005549474 podman[77199]: 2025-12-07 09:39:44.128669739 +0000 UTC m=+0.138080114 container attach 70957b0f6d0fdba674d0fc023237b738075dbf03981158fcfd58b844d1619e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kirch, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 04:39:44 np0005549474 determined_kirch[77215]: 167 167
Dec  7 04:39:44 np0005549474 systemd[1]: libpod-70957b0f6d0fdba674d0fc023237b738075dbf03981158fcfd58b844d1619e7b.scope: Deactivated successfully.
Dec  7 04:39:44 np0005549474 podman[77199]: 2025-12-07 09:39:44.130298301 +0000 UTC m=+0.139708676 container died 70957b0f6d0fdba674d0fc023237b738075dbf03981158fcfd58b844d1619e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 04:39:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec  7 04:39:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1603585838' entity='client.admin' 
Dec  7 04:39:44 np0005549474 admiring_tharp[77110]: set mgr/dashboard/cluster/status
Dec  7 04:39:44 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2508a5e32555275d384bc69cde6e5e182d313e7e30015b67a77eff6ad48261ab-merged.mount: Deactivated successfully.
Dec  7 04:39:44 np0005549474 systemd[1]: libpod-ffd172bb576fbde780a07258f3dff404a480898f1168af066353c7e1c353ef8f.scope: Deactivated successfully.
Dec  7 04:39:44 np0005549474 podman[77199]: 2025-12-07 09:39:44.275992455 +0000 UTC m=+0.285402810 container remove 70957b0f6d0fdba674d0fc023237b738075dbf03981158fcfd58b844d1619e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:44 np0005549474 podman[77069]: 2025-12-07 09:39:44.286826753 +0000 UTC m=+0.742514113 container died ffd172bb576fbde780a07258f3dff404a480898f1168af066353c7e1c353ef8f (image=quay.io/ceph/ceph:v19, name=admiring_tharp, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec  7 04:39:44 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d075bf56d5666f6181b3ad14857ce9f320ef8271cf312e12bc19d773bc04905b-merged.mount: Deactivated successfully.
Dec  7 04:39:44 np0005549474 podman[77069]: 2025-12-07 09:39:44.399027588 +0000 UTC m=+0.854714948 container remove ffd172bb576fbde780a07258f3dff404a480898f1168af066353c7e1c353ef8f (image=quay.io/ceph/ceph:v19, name=admiring_tharp, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:44 np0005549474 systemd[1]: libpod-conmon-ffd172bb576fbde780a07258f3dff404a480898f1168af066353c7e1c353ef8f.scope: Deactivated successfully.
Dec  7 04:39:44 np0005549474 systemd[1]: libpod-conmon-70957b0f6d0fdba674d0fc023237b738075dbf03981158fcfd58b844d1619e7b.scope: Deactivated successfully.
Dec  7 04:39:44 np0005549474 podman[77249]: 2025-12-07 09:39:44.634824762 +0000 UTC m=+0.084535743 container create ce16b5f9486127bac72b7c1465c16f747fde68d9f92622435630d2d34abd29c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 04:39:44 np0005549474 podman[77249]: 2025-12-07 09:39:44.574933624 +0000 UTC m=+0.024644625 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:39:44 np0005549474 systemd[1]: Started libpod-conmon-ce16b5f9486127bac72b7c1465c16f747fde68d9f92622435630d2d34abd29c4.scope.
Dec  7 04:39:44 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce7d68d36ead975d545a130ad17ae8761f3c58a030213adc0a6ddcf18ba9252/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce7d68d36ead975d545a130ad17ae8761f3c58a030213adc0a6ddcf18ba9252/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce7d68d36ead975d545a130ad17ae8761f3c58a030213adc0a6ddcf18ba9252/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce7d68d36ead975d545a130ad17ae8761f3c58a030213adc0a6ddcf18ba9252/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:44 np0005549474 podman[77249]: 2025-12-07 09:39:44.708370943 +0000 UTC m=+0.158082014 container init ce16b5f9486127bac72b7c1465c16f747fde68d9f92622435630d2d34abd29c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_herschel, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:39:44 np0005549474 podman[77249]: 2025-12-07 09:39:44.7259713 +0000 UTC m=+0.175682281 container start ce16b5f9486127bac72b7c1465c16f747fde68d9f92622435630d2d34abd29c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_herschel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 04:39:44 np0005549474 podman[77249]: 2025-12-07 09:39:44.736298574 +0000 UTC m=+0.186009655 container attach ce16b5f9486127bac72b7c1465c16f747fde68d9f92622435630d2d34abd29c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:44 np0005549474 python3[77295]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:39:44 np0005549474 ceph-mon[74516]: Added label _admin to host compute-0
Dec  7 04:39:44 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1603585838' entity='client.admin' 
Dec  7 04:39:44 np0005549474 podman[77296]: 2025-12-07 09:39:44.936996026 +0000 UTC m=+0.045320663 container create 22e51e7ae167e5c2971ae5159215c08b36b39d3cfbb0257634ed6c6070d58e1f (image=quay.io/ceph/ceph:v19, name=cool_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:44 np0005549474 systemd[1]: Started libpod-conmon-22e51e7ae167e5c2971ae5159215c08b36b39d3cfbb0257634ed6c6070d58e1f.scope.
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:39:45 np0005549474 podman[77296]: 2025-12-07 09:39:44.922494672 +0000 UTC m=+0.030819329 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:45 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:45 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d864929b6798f34bf79045b8a8fe375492160c6e819e8e3384780d843b7a6e2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:45 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d864929b6798f34bf79045b8a8fe375492160c6e819e8e3384780d843b7a6e2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:45 np0005549474 podman[77296]: 2025-12-07 09:39:45.185799265 +0000 UTC m=+0.294123922 container init 22e51e7ae167e5c2971ae5159215c08b36b39d3cfbb0257634ed6c6070d58e1f (image=quay.io/ceph/ceph:v19, name=cool_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 04:39:45 np0005549474 podman[77296]: 2025-12-07 09:39:45.191041034 +0000 UTC m=+0.299365671 container start 22e51e7ae167e5c2971ae5159215c08b36b39d3cfbb0257634ed6c6070d58e1f (image=quay.io/ceph/ceph:v19, name=cool_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:45 np0005549474 podman[77296]: 2025-12-07 09:39:45.19809857 +0000 UTC m=+0.306423267 container attach 22e51e7ae167e5c2971ae5159215c08b36b39d3cfbb0257634ed6c6070d58e1f (image=quay.io/ceph/ceph:v19, name=cool_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]: [
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:    {
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:        "available": false,
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:        "being_replaced": false,
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:        "ceph_device_lvm": false,
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:        "lsm_data": {},
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:        "lvs": [],
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:        "path": "/dev/sr0",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:        "rejected_reasons": [
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "Has a FileSystem",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "Insufficient space (<5GB)"
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:        ],
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:        "sys_api": {
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "actuators": null,
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "device_nodes": [
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:                "sr0"
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            ],
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "devname": "sr0",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "human_readable_size": "482.00 KB",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "id_bus": "ata",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "model": "QEMU DVD-ROM",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "nr_requests": "2",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "parent": "/dev/sr0",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "partitions": {},
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "path": "/dev/sr0",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "removable": "1",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "rev": "2.5+",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "ro": "0",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "rotational": "1",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "sas_address": "",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "sas_device_handle": "",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "scheduler_mode": "mq-deadline",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "sectors": 0,
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "sectorsize": "2048",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "size": 493568.0,
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "support_discard": "2048",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "type": "disk",
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:            "vendor": "QEMU"
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:        }
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]:    }
Dec  7 04:39:45 np0005549474 hopeful_herschel[77265]: ]
Dec  7 04:39:45 np0005549474 systemd[1]: libpod-ce16b5f9486127bac72b7c1465c16f747fde68d9f92622435630d2d34abd29c4.scope: Deactivated successfully.
Dec  7 04:39:45 np0005549474 podman[77249]: 2025-12-07 09:39:45.505806031 +0000 UTC m=+0.955517022 container died ce16b5f9486127bac72b7c1465c16f747fde68d9f92622435630d2d34abd29c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4191798248' entity='client.admin' 
Dec  7 04:39:45 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2ce7d68d36ead975d545a130ad17ae8761f3c58a030213adc0a6ddcf18ba9252-merged.mount: Deactivated successfully.
Dec  7 04:39:45 np0005549474 systemd[1]: libpod-22e51e7ae167e5c2971ae5159215c08b36b39d3cfbb0257634ed6c6070d58e1f.scope: Deactivated successfully.
Dec  7 04:39:45 np0005549474 podman[77249]: 2025-12-07 09:39:45.556572628 +0000 UTC m=+1.006283629 container remove ce16b5f9486127bac72b7c1465c16f747fde68d9f92622435630d2d34abd29c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_herschel, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:45 np0005549474 podman[77296]: 2025-12-07 09:39:45.557909434 +0000 UTC m=+0.666234071 container died 22e51e7ae167e5c2971ae5159215c08b36b39d3cfbb0257634ed6c6070d58e1f (image=quay.io/ceph/ceph:v19, name=cool_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:45 np0005549474 systemd[1]: libpod-conmon-ce16b5f9486127bac72b7c1465c16f747fde68d9f92622435630d2d34abd29c4.scope: Deactivated successfully.
Dec  7 04:39:45 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3d864929b6798f34bf79045b8a8fe375492160c6e819e8e3384780d843b7a6e2-merged.mount: Deactivated successfully.
Dec  7 04:39:45 np0005549474 podman[77296]: 2025-12-07 09:39:45.593446046 +0000 UTC m=+0.701770683 container remove 22e51e7ae167e5c2971ae5159215c08b36b39d3cfbb0257634ed6c6070d58e1f (image=quay.io/ceph/ceph:v19, name=cool_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:45 np0005549474 systemd[1]: libpod-conmon-22e51e7ae167e5c2971ae5159215c08b36b39d3cfbb0257634ed6c6070d58e1f.scope: Deactivated successfully.
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:39:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:39:45 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:39:45 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:39:45 np0005549474 ceph-mgr[74811]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 04:39:46 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:39:46 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:39:46 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/4191798248' entity='client.admin' 
Dec  7 04:39:46 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:46 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:46 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:46 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:46 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:39:46 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:39:46 np0005549474 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:39:46 np0005549474 ansible-async_wrapper.py[78973]: Invoked with j565468708247 30 /home/zuul/.ansible/tmp/ansible-tmp-1765100385.999788-37076-176312537218089/AnsiballZ_command.py _
Dec  7 04:39:46 np0005549474 ansible-async_wrapper.py[79026]: Starting module and watcher
Dec  7 04:39:46 np0005549474 ansible-async_wrapper.py[79026]: Start watching 79029 (30)
Dec  7 04:39:46 np0005549474 ansible-async_wrapper.py[79029]: Start module (79029)
Dec  7 04:39:46 np0005549474 ansible-async_wrapper.py[78973]: Return async_wrapper task started.
Dec  7 04:39:46 np0005549474 python3[79033]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:39:46 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:39:46 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:39:46 np0005549474 podman[79101]: 2025-12-07 09:39:46.889337964 +0000 UTC m=+0.054520637 container create 8a9cc579a9069e09e07567c11b620d9e58e9646d1ec9ec459eccc0c6da240d62 (image=quay.io/ceph/ceph:v19, name=boring_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 04:39:46 np0005549474 systemd[1]: Started libpod-conmon-8a9cc579a9069e09e07567c11b620d9e58e9646d1ec9ec459eccc0c6da240d62.scope.
Dec  7 04:39:46 np0005549474 podman[79101]: 2025-12-07 09:39:46.860598842 +0000 UTC m=+0.025781565 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:46 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b011a02e1130ec37a01007d22566d417957903f6849b09d97f9cff1d99835e7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b011a02e1130ec37a01007d22566d417957903f6849b09d97f9cff1d99835e7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:46 np0005549474 podman[79101]: 2025-12-07 09:39:46.981118188 +0000 UTC m=+0.146300911 container init 8a9cc579a9069e09e07567c11b620d9e58e9646d1ec9ec459eccc0c6da240d62 (image=quay.io/ceph/ceph:v19, name=boring_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 04:39:46 np0005549474 podman[79101]: 2025-12-07 09:39:46.989668155 +0000 UTC m=+0.154850828 container start 8a9cc579a9069e09e07567c11b620d9e58e9646d1ec9ec459eccc0c6da240d62 (image=quay.io/ceph/ceph:v19, name=boring_borg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:46 np0005549474 podman[79101]: 2025-12-07 09:39:46.994061882 +0000 UTC m=+0.159244555 container attach 8a9cc579a9069e09e07567c11b620d9e58e9646d1ec9ec459eccc0c6da240d62 (image=quay.io/ceph/ceph:v19, name=boring_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Dec  7 04:39:47 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 04:39:47 np0005549474 boring_borg[79144]: 
Dec  7 04:39:47 np0005549474 boring_borg[79144]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  7 04:39:47 np0005549474 systemd[1]: libpod-8a9cc579a9069e09e07567c11b620d9e58e9646d1ec9ec459eccc0c6da240d62.scope: Deactivated successfully.
Dec  7 04:39:47 np0005549474 podman[79101]: 2025-12-07 09:39:47.356055742 +0000 UTC m=+0.521238385 container died 8a9cc579a9069e09e07567c11b620d9e58e9646d1ec9ec459eccc0c6da240d62 (image=quay.io/ceph/ceph:v19, name=boring_borg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  7 04:39:47 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b011a02e1130ec37a01007d22566d417957903f6849b09d97f9cff1d99835e7b-merged.mount: Deactivated successfully.
Dec  7 04:39:47 np0005549474 podman[79101]: 2025-12-07 09:39:47.395412006 +0000 UTC m=+0.560594639 container remove 8a9cc579a9069e09e07567c11b620d9e58e9646d1ec9ec459eccc0c6da240d62 (image=quay.io/ceph/ceph:v19, name=boring_borg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 04:39:47 np0005549474 systemd[1]: libpod-conmon-8a9cc579a9069e09e07567c11b620d9e58e9646d1ec9ec459eccc0c6da240d62.scope: Deactivated successfully.
Dec  7 04:39:47 np0005549474 ansible-async_wrapper.py[79029]: Module complete (79029)
Dec  7 04:39:47 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:39:47 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:39:47 np0005549474 ceph-mgr[74811]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec  7 04:39:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:47 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 8b8e2b49-d70b-4a9c-befd-576f073ab1be (Updating crash deployment (+1 -> 1))
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:39:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:39:47 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec  7 04:39:47 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec  7 04:39:48 np0005549474 podman[79692]: 2025-12-07 09:39:48.526832662 +0000 UTC m=+0.049391231 container create 03c2378f7134631cd98ac51a72653013618666e46c829e82e65a09a4d58ac79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hoover, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Dec  7 04:39:48 np0005549474 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:39:48 np0005549474 ceph-mon[74516]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  7 04:39:48 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:48 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:48 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:48 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 04:39:48 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 04:39:48 np0005549474 ceph-mon[74516]: Deploying daemon crash.compute-0 on compute-0
Dec  7 04:39:48 np0005549474 systemd[1]: Started libpod-conmon-03c2378f7134631cd98ac51a72653013618666e46c829e82e65a09a4d58ac79a.scope.
Dec  7 04:39:48 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:48 np0005549474 podman[79692]: 2025-12-07 09:39:48.509070121 +0000 UTC m=+0.031628720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:39:48 np0005549474 podman[79692]: 2025-12-07 09:39:48.613899741 +0000 UTC m=+0.136458410 container init 03c2378f7134631cd98ac51a72653013618666e46c829e82e65a09a4d58ac79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:48 np0005549474 podman[79692]: 2025-12-07 09:39:48.621643337 +0000 UTC m=+0.144201936 container start 03c2378f7134631cd98ac51a72653013618666e46c829e82e65a09a4d58ac79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:48 np0005549474 determined_hoover[79708]: 167 167
Dec  7 04:39:48 np0005549474 podman[79692]: 2025-12-07 09:39:48.626281319 +0000 UTC m=+0.148839928 container attach 03c2378f7134631cd98ac51a72653013618666e46c829e82e65a09a4d58ac79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:48 np0005549474 systemd[1]: libpod-03c2378f7134631cd98ac51a72653013618666e46c829e82e65a09a4d58ac79a.scope: Deactivated successfully.
Dec  7 04:39:48 np0005549474 podman[79692]: 2025-12-07 09:39:48.627249036 +0000 UTC m=+0.149807625 container died 03c2378f7134631cd98ac51a72653013618666e46c829e82e65a09a4d58ac79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hoover, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  7 04:39:48 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5a722ed8378e7118cbe35e2a8e9eaac03bea6d32ea3e850557241aa2d8b0ad54-merged.mount: Deactivated successfully.
Dec  7 04:39:48 np0005549474 podman[79692]: 2025-12-07 09:39:48.673022579 +0000 UTC m=+0.195581148 container remove 03c2378f7134631cd98ac51a72653013618666e46c829e82e65a09a4d58ac79a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 04:39:48 np0005549474 systemd[1]: libpod-conmon-03c2378f7134631cd98ac51a72653013618666e46c829e82e65a09a4d58ac79a.scope: Deactivated successfully.
Dec  7 04:39:48 np0005549474 systemd[1]: Reloading.
Dec  7 04:39:48 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:39:48 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:39:49 np0005549474 systemd[1]: Reloading.
Dec  7 04:39:49 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:39:49 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:39:49 np0005549474 systemd[1]: Starting Ceph crash.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:39:49 np0005549474 python3[79851]: ansible-ansible.legacy.async_status Invoked with jid=j565468708247.78973 mode=status _async_dir=/root/.ansible_async
Dec  7 04:39:49 np0005549474 podman[79941]: 2025-12-07 09:39:49.643075666 +0000 UTC m=+0.049823663 container create 3282b59c6d2ba5bf4c35d15661b25863269b2f188083fc6d1403f01fbaeb87a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1bae37d3d432b09c2a194eb6b67ed182217349068fe13e357495cd84db1cb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1bae37d3d432b09c2a194eb6b67ed182217349068fe13e357495cd84db1cb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1bae37d3d432b09c2a194eb6b67ed182217349068fe13e357495cd84db1cb1/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1bae37d3d432b09c2a194eb6b67ed182217349068fe13e357495cd84db1cb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:49 np0005549474 podman[79941]: 2025-12-07 09:39:49.618590996 +0000 UTC m=+0.025338983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:39:49 np0005549474 python3[79961]: ansible-ansible.legacy.async_status Invoked with jid=j565468708247.78973 mode=cleanup _async_dir=/root/.ansible_async
Dec  7 04:39:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:39:49 np0005549474 podman[79941]: 2025-12-07 09:39:49.841770515 +0000 UTC m=+0.248518572 container init 3282b59c6d2ba5bf4c35d15661b25863269b2f188083fc6d1403f01fbaeb87a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:49 np0005549474 podman[79941]: 2025-12-07 09:39:49.850380354 +0000 UTC m=+0.257128361 container start 3282b59c6d2ba5bf4c35d15661b25863269b2f188083fc6d1403f01fbaeb87a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:39:49 np0005549474 bash[79941]: 3282b59c6d2ba5bf4c35d15661b25863269b2f188083fc6d1403f01fbaeb87a0
Dec  7 04:39:49 np0005549474 systemd[1]: Started Ceph crash.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:39:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:49 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 8b8e2b49-d70b-4a9c-befd-576f073ab1be (Updating crash deployment (+1 -> 1))
Dec  7 04:39:49 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 8b8e2b49-d70b-4a9c-befd-576f073ab1be (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 04:39:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:39:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: 2025-12-07T09:39:50.013+0000 7f532b3cd640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  7 04:39:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: 2025-12-07T09:39:50.013+0000 7f532b3cd640 -1 AuthRegistry(0x7f5324069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  7 04:39:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: 2025-12-07T09:39:50.014+0000 7f532b3cd640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  7 04:39:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: 2025-12-07T09:39:50.014+0000 7f532b3cd640 -1 AuthRegistry(0x7f532b3cbff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  7 04:39:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: 2025-12-07T09:39:50.015+0000 7f5329142640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  7 04:39:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: 2025-12-07T09:39:50.015+0000 7f532b3cd640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec  7 04:39:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  7 04:39:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec  7 04:39:50 np0005549474 python3[80080]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 04:39:50 np0005549474 podman[80156]: 2025-12-07 09:39:50.648221933 +0000 UTC m=+0.061742078 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 04:39:50 np0005549474 podman[80156]: 2025-12-07 09:39:50.764747044 +0000 UTC m=+0.178267189 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:50 np0005549474 python3[80202]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:39:50 np0005549474 podman[80218]: 2025-12-07 09:39:50.863869943 +0000 UTC m=+0.038441491 container create 4b4fcb8c63eb59c2cc0bb920a10c78769f762fe1094bcff82260dc2cb025d817 (image=quay.io/ceph/ceph:v19, name=relaxed_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:50 np0005549474 systemd[1]: Started libpod-conmon-4b4fcb8c63eb59c2cc0bb920a10c78769f762fe1094bcff82260dc2cb025d817.scope.
Dec  7 04:39:50 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:50 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:50 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:50 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:50 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:50 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:50 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1635743ff0e9fdfa66d350cb0242de1e48c2b8a53cd5940c4ae5cea5db23352f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1635743ff0e9fdfa66d350cb0242de1e48c2b8a53cd5940c4ae5cea5db23352f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1635743ff0e9fdfa66d350cb0242de1e48c2b8a53cd5940c4ae5cea5db23352f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:50 np0005549474 podman[80218]: 2025-12-07 09:39:50.846080761 +0000 UTC m=+0.020652329 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:50 np0005549474 podman[80218]: 2025-12-07 09:39:50.952599026 +0000 UTC m=+0.127170584 container init 4b4fcb8c63eb59c2cc0bb920a10c78769f762fe1094bcff82260dc2cb025d817 (image=quay.io/ceph/ceph:v19, name=relaxed_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 04:39:50 np0005549474 podman[80218]: 2025-12-07 09:39:50.960913867 +0000 UTC m=+0.135485415 container start 4b4fcb8c63eb59c2cc0bb920a10c78769f762fe1094bcff82260dc2cb025d817 (image=quay.io/ceph/ceph:v19, name=relaxed_allen, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 04:39:50 np0005549474 podman[80218]: 2025-12-07 09:39:50.970553472 +0000 UTC m=+0.145125020 container attach 4b4fcb8c63eb59c2cc0bb920a10c78769f762fe1094bcff82260dc2cb025d817 (image=quay.io/ceph/ceph:v19, name=relaxed_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:39:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
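
The burst of config-key set commands under mgr/cephadm/ is the cephadm mgr module checkpointing its state into the monitors' key-value store: per-host device inventory (host.compute-0.devices.0), host metadata, the OSD removal queue, and generated basic-auth credentials for the alertmanager and prometheus web endpoints. The bare from=...entity=... lines with no cmd= payload appear to be the same audit entries echoed back through the mon's log stream with the command body elided. The stored keys can be listed with the same containerized-client pattern used throughout this run; a minimal sketch (the grep filter is illustrative):

    podman run --rm --net=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      config-key ls | grep mgr/cephadm
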
Dec  7 04:39:51 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec  7 04:39:51 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:39:51 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  7 04:39:51 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  7 04:39:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 04:39:51 np0005549474 relaxed_allen[80251]: 
Dec  7 04:39:51 np0005549474 relaxed_allen[80251]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  7 04:39:51 np0005549474 systemd[1]: libpod-4b4fcb8c63eb59c2cc0bb920a10c78769f762fe1094bcff82260dc2cb025d817.scope: Deactivated successfully.
Dec  7 04:39:51 np0005549474 podman[80218]: 2025-12-07 09:39:51.335805149 +0000 UTC m=+0.510376717 container died 4b4fcb8c63eb59c2cc0bb920a10c78769f762fe1094bcff82260dc2cb025d817 (image=quay.io/ceph/ceph:v19, name=relaxed_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 04:39:51 np0005549474 systemd[1]: var-lib-containers-storage-overlay-1635743ff0e9fdfa66d350cb0242de1e48c2b8a53cd5940c4ae5cea5db23352f-merged.mount: Deactivated successfully.
Dec  7 04:39:51 np0005549474 podman[80218]: 2025-12-07 09:39:51.384706886 +0000 UTC m=+0.559278434 container remove 4b4fcb8c63eb59c2cc0bb920a10c78769f762fe1094bcff82260dc2cb025d817 (image=quay.io/ceph/ceph:v19, name=relaxed_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:51 np0005549474 systemd[1]: libpod-conmon-4b4fcb8c63eb59c2cc0bb920a10c78769f762fe1094bcff82260dc2cb025d817.scope: Deactivated successfully.
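
The relaxed_allen container above is the orch status task from 04:39:50 completing its round trip: podman creates, starts, and attaches a throwaway container, the ceph CLI inside it prints one JSON object, and systemd tears the scopes down again, all within about a second. {"available": true, "backend": "cephadm", "paused": false, "workers": 10} says the cephadm backend is active and its serve loop is not paused; workers reports the module's worker-pool size. A sketch of the equivalent manual call, with the fsid, image, and mounts taken from the log:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch status --format json
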
Dec  7 04:39:51 np0005549474 podman[80392]: 2025-12-07 09:39:51.560449528 +0000 UTC m=+0.042319945 container create 9a5278e4dc5f95e99782ca4ac6805eeced21538d0b50fc6617bce93f5a48e264 (image=quay.io/ceph/ceph:v19, name=stupefied_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 04:39:51 np0005549474 systemd[1]: Started libpod-conmon-9a5278e4dc5f95e99782ca4ac6805eeced21538d0b50fc6617bce93f5a48e264.scope.
Dec  7 04:39:51 np0005549474 podman[80392]: 2025-12-07 09:39:51.537643652 +0000 UTC m=+0.019514089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:51 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:51 np0005549474 ansible-async_wrapper.py[79026]: Done in kid B.
Dec  7 04:39:51 np0005549474 podman[80392]: 2025-12-07 09:39:51.672487379 +0000 UTC m=+0.154357846 container init 9a5278e4dc5f95e99782ca4ac6805eeced21538d0b50fc6617bce93f5a48e264 (image=quay.io/ceph/ceph:v19, name=stupefied_payne, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:51 np0005549474 podman[80392]: 2025-12-07 09:39:51.683616984 +0000 UTC m=+0.165487391 container start 9a5278e4dc5f95e99782ca4ac6805eeced21538d0b50fc6617bce93f5a48e264 (image=quay.io/ceph/ceph:v19, name=stupefied_payne, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:51 np0005549474 podman[80392]: 2025-12-07 09:39:51.687875047 +0000 UTC m=+0.169745544 container attach 9a5278e4dc5f95e99782ca4ac6805eeced21538d0b50fc6617bce93f5a48e264 (image=quay.io/ceph/ceph:v19, name=stupefied_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:51 np0005549474 stupefied_payne[80408]: 167 167
Dec  7 04:39:51 np0005549474 systemd[1]: libpod-9a5278e4dc5f95e99782ca4ac6805eeced21538d0b50fc6617bce93f5a48e264.scope: Deactivated successfully.
Dec  7 04:39:51 np0005549474 podman[80392]: 2025-12-07 09:39:51.692762957 +0000 UTC m=+0.174633374 container died 9a5278e4dc5f95e99782ca4ac6805eeced21538d0b50fc6617bce93f5a48e264 (image=quay.io/ceph/ceph:v19, name=stupefied_payne, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:51 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2f6a07a40b38d90f7d3f9d5df03421216b2180cab6567e557c051394d3e42768-merged.mount: Deactivated successfully.
Dec  7 04:39:51 np0005549474 podman[80392]: 2025-12-07 09:39:51.739724962 +0000 UTC m=+0.221595369 container remove 9a5278e4dc5f95e99782ca4ac6805eeced21538d0b50fc6617bce93f5a48e264 (image=quay.io/ceph/ceph:v19, name=stupefied_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:51 np0005549474 systemd[1]: libpod-conmon-9a5278e4dc5f95e99782ca4ac6805eeced21538d0b50fc6617bce93f5a48e264.scope: Deactivated successfully.
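
stupefied_payne exits after printing "167 167", which matches cephadm's uid/gid probe: 167 is the fixed uid and gid of the ceph user in the quay.io/ceph/ceph images, and cephadm queries it so the files it writes on the host (keyrings, configs under /var/lib/ceph/<fsid>) get the right ownership. The probe's exact command line is not visible in this log; a plausible reconstruction, with the stat target being an assumption:

    # hypothetical: report the owner of /var/lib/ceph inside the image
    podman run --rm --entrypoint stat quay.io/ceph/ceph:v19 -c '%u %g' /var/lib/ceph
    # -> 167 167
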
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:51 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.dotugk (unknown last config time)...
Dec  7 04:39:51 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.dotugk (unknown last config time)...
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.dotugk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dotugk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:39:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:39:51 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.dotugk on compute-0
Dec  7 04:39:51 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.dotugk on compute-0
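
The two "Reconfiguring ..." pairs show the cephadm serve loop refreshing daemons whose last config push it cannot date ("unknown last config time"): for each of mon.compute-0 and mgr.compute-0.dotugk it gathers a minimal ceph.conf (config generate-minimal-conf) and the daemon's keyring (auth get / auth get-or-create), then redelivers them. The same action can be requested per daemon; a sketch using the orch CLI (wrap it in the podman pattern above if no host client is installed):

    ceph orch daemon reconfig mon.compute-0
    ceph orch daemon reconfig mgr.compute-0.dotugk
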
Dec  7 04:39:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:39:51 np0005549474 python3[80445]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:39:51 np0005549474 podman[80475]: 2025-12-07 09:39:51.953138932 +0000 UTC m=+0.072061122 container create 9b3801aff80b1fe399df7820252045bb73c8573999cc34ed6eee2dab0787351f (image=quay.io/ceph/ceph:v19, name=sleepy_heyrovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 04:39:51 np0005549474 systemd[1]: Started libpod-conmon-9b3801aff80b1fe399df7820252045bb73c8573999cc34ed6eee2dab0787351f.scope.
Dec  7 04:39:51 np0005549474 podman[80475]: 2025-12-07 09:39:51.901890412 +0000 UTC m=+0.020812622 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dotugk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 04:39:52 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a431a37daa4c4591dfbbe2df7e53d608f75f8d6aa5450415f9d909dbd83f38/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a431a37daa4c4591dfbbe2df7e53d608f75f8d6aa5450415f9d909dbd83f38/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a431a37daa4c4591dfbbe2df7e53d608f75f8d6aa5450415f9d909dbd83f38/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:52 np0005549474 podman[80475]: 2025-12-07 09:39:52.054175141 +0000 UTC m=+0.173097421 container init 9b3801aff80b1fe399df7820252045bb73c8573999cc34ed6eee2dab0787351f (image=quay.io/ceph/ceph:v19, name=sleepy_heyrovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:52 np0005549474 podman[80475]: 2025-12-07 09:39:52.062972134 +0000 UTC m=+0.181894324 container start 9b3801aff80b1fe399df7820252045bb73c8573999cc34ed6eee2dab0787351f (image=quay.io/ceph/ceph:v19, name=sleepy_heyrovsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:52 np0005549474 podman[80475]: 2025-12-07 09:39:52.066718863 +0000 UTC m=+0.185641093 container attach 9b3801aff80b1fe399df7820252045bb73c8573999cc34ed6eee2dab0787351f (image=quay.io/ceph/ceph:v19, name=sleepy_heyrovsky, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:39:52 np0005549474 podman[80556]: 2025-12-07 09:39:52.317144555 +0000 UTC m=+0.057401124 container create 73c0dfd3305d483b8a3bde47bfd322c6e59adf6a8f77c11458d4e455ccdcdbdd (image=quay.io/ceph/ceph:v19, name=suspicious_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  7 04:39:52 np0005549474 systemd[1]: Started libpod-conmon-73c0dfd3305d483b8a3bde47bfd322c6e59adf6a8f77c11458d4e455ccdcdbdd.scope.
Dec  7 04:39:52 np0005549474 podman[80556]: 2025-12-07 09:39:52.290784945 +0000 UTC m=+0.031041584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:52 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:52 np0005549474 podman[80556]: 2025-12-07 09:39:52.410334256 +0000 UTC m=+0.150590905 container init 73c0dfd3305d483b8a3bde47bfd322c6e59adf6a8f77c11458d4e455ccdcdbdd (image=quay.io/ceph/ceph:v19, name=suspicious_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/766451667' entity='client.admin' 
Dec  7 04:39:52 np0005549474 podman[80556]: 2025-12-07 09:39:52.420944538 +0000 UTC m=+0.161201097 container start 73c0dfd3305d483b8a3bde47bfd322c6e59adf6a8f77c11458d4e455ccdcdbdd (image=quay.io/ceph/ceph:v19, name=suspicious_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 04:39:52 np0005549474 suspicious_grothendieck[80573]: 167 167
Dec  7 04:39:52 np0005549474 systemd[1]: libpod-73c0dfd3305d483b8a3bde47bfd322c6e59adf6a8f77c11458d4e455ccdcdbdd.scope: Deactivated successfully.
Dec  7 04:39:52 np0005549474 podman[80556]: 2025-12-07 09:39:52.425169769 +0000 UTC m=+0.165426348 container attach 73c0dfd3305d483b8a3bde47bfd322c6e59adf6a8f77c11458d4e455ccdcdbdd (image=quay.io/ceph/ceph:v19, name=suspicious_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 04:39:52 np0005549474 podman[80556]: 2025-12-07 09:39:52.425699823 +0000 UTC m=+0.165956382 container died 73c0dfd3305d483b8a3bde47bfd322c6e59adf6a8f77c11458d4e455ccdcdbdd (image=quay.io/ceph/ceph:v19, name=suspicious_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 04:39:52 np0005549474 systemd[1]: libpod-9b3801aff80b1fe399df7820252045bb73c8573999cc34ed6eee2dab0787351f.scope: Deactivated successfully.
Dec  7 04:39:52 np0005549474 podman[80475]: 2025-12-07 09:39:52.439175841 +0000 UTC m=+0.558098041 container died 9b3801aff80b1fe399df7820252045bb73c8573999cc34ed6eee2dab0787351f (image=quay.io/ceph/ceph:v19, name=sleepy_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:52 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3bf70b23ee61d5f73ee5298fb5ce79195483544f179e2e0a7005340a774cd302-merged.mount: Deactivated successfully.
Dec  7 04:39:52 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e1a431a37daa4c4591dfbbe2df7e53d608f75f8d6aa5450415f9d909dbd83f38-merged.mount: Deactivated successfully.
Dec  7 04:39:52 np0005549474 podman[80556]: 2025-12-07 09:39:52.473757529 +0000 UTC m=+0.214014088 container remove 73c0dfd3305d483b8a3bde47bfd322c6e59adf6a8f77c11458d4e455ccdcdbdd (image=quay.io/ceph/ceph:v19, name=suspicious_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:39:52 np0005549474 podman[80475]: 2025-12-07 09:39:52.489052354 +0000 UTC m=+0.607974544 container remove 9b3801aff80b1fe399df7820252045bb73c8573999cc34ed6eee2dab0787351f (image=quay.io/ceph/ceph:v19, name=sleepy_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:39:52 np0005549474 systemd[1]: libpod-conmon-73c0dfd3305d483b8a3bde47bfd322c6e59adf6a8f77c11458d4e455ccdcdbdd.scope: Deactivated successfully.
Dec  7 04:39:52 np0005549474 systemd[1]: libpod-conmon-9b3801aff80b1fe399df7820252045bb73c8573999cc34ed6eee2dab0787351f.scope: Deactivated successfully.
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 python3[80654]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:39:52 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 1 completed events
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:39:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:52 np0005549474 podman[80655]: 2025-12-07 09:39:52.935164965 +0000 UTC m=+0.054283570 container create 5e9b9ee482d2bb93fda9bd62f1c13ac901a9cce5c684bcf09cb8638d2b5b4a62 (image=quay.io/ceph/ceph:v19, name=beautiful_visvesvaraya, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  7 04:39:52 np0005549474 systemd[1]: Started libpod-conmon-5e9b9ee482d2bb93fda9bd62f1c13ac901a9cce5c684bcf09cb8638d2b5b4a62.scope.
Dec  7 04:39:52 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493a2cbb57d200d6cc90ec8eb87e4bee445a3216931019c615eeab3528a9c761/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493a2cbb57d200d6cc90ec8eb87e4bee445a3216931019c615eeab3528a9c761/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493a2cbb57d200d6cc90ec8eb87e4bee445a3216931019c615eeab3528a9c761/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: Reconfiguring mgr.compute-0.dotugk (unknown last config time)...
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: Reconfiguring daemon mgr.compute-0.dotugk on compute-0
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/766451667' entity='client.admin' 
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:53 np0005549474 podman[80655]: 2025-12-07 09:39:52.915681708 +0000 UTC m=+0.034800313 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:53 np0005549474 podman[80655]: 2025-12-07 09:39:53.02207966 +0000 UTC m=+0.141198265 container init 5e9b9ee482d2bb93fda9bd62f1c13ac901a9cce5c684bcf09cb8638d2b5b4a62 (image=quay.io/ceph/ceph:v19, name=beautiful_visvesvaraya, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:53 np0005549474 podman[80655]: 2025-12-07 09:39:53.032004384 +0000 UTC m=+0.151123009 container start 5e9b9ee482d2bb93fda9bd62f1c13ac901a9cce5c684bcf09cb8638d2b5b4a62 (image=quay.io/ceph/ceph:v19, name=beautiful_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:53 np0005549474 podman[80655]: 2025-12-07 09:39:53.035454995 +0000 UTC m=+0.154573590 container attach 5e9b9ee482d2bb93fda9bd62f1c13ac901a9cce5c684bcf09cb8638d2b5b4a62 (image=quay.io/ceph/ceph:v19, name=beautiful_visvesvaraya, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2616469226' entity='client.admin' 
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:39:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:53 np0005549474 systemd[1]: libpod-5e9b9ee482d2bb93fda9bd62f1c13ac901a9cce5c684bcf09cb8638d2b5b4a62.scope: Deactivated successfully.
Dec  7 04:39:53 np0005549474 podman[80655]: 2025-12-07 09:39:53.430907212 +0000 UTC m=+0.550025787 container died 5e9b9ee482d2bb93fda9bd62f1c13ac901a9cce5c684bcf09cb8638d2b5b4a62 (image=quay.io/ceph/ceph:v19, name=beautiful_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:39:53 np0005549474 systemd[1]: var-lib-containers-storage-overlay-493a2cbb57d200d6cc90ec8eb87e4bee445a3216931019c615eeab3528a9c761-merged.mount: Deactivated successfully.
Dec  7 04:39:53 np0005549474 podman[80655]: 2025-12-07 09:39:53.477387916 +0000 UTC m=+0.596506501 container remove 5e9b9ee482d2bb93fda9bd62f1c13ac901a9cce5c684bcf09cb8638d2b5b4a62 (image=quay.io/ceph/ceph:v19, name=beautiful_visvesvaraya, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:39:53 np0005549474 systemd[1]: libpod-conmon-5e9b9ee482d2bb93fda9bd62f1c13ac901a9cce5c684bcf09cb8638d2b5b4a62.scope: Deactivated successfully.
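
At this point the playbook has flipped two global logging options, each through its own throwaway container: log_to_file (sleepy_heyrovsky, 04:39:51) and mon_cluster_log_to_file (beautiful_visvesvaraya, 04:39:52). Containerized Ceph logs to stderr by default, which is why the daemons' output lands in this journal; these two settings additionally re-enable classic file logging for the daemon logs and the cluster log. Stripped of the podman wrapper, the two calls are simply:

    ceph config set global log_to_file true
    ceph config set global mon_cluster_log_to_file true
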
Dec  7 04:39:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:39:53 np0005549474 python3[80757]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:39:53 np0005549474 podman[80758]: 2025-12-07 09:39:53.921089163 +0000 UTC m=+0.058615496 container create 8352ad0f8dad8848ec78e9694c425bb8ecbbe08fc4659f94df2bc7b28763ebec (image=quay.io/ceph/ceph:v19, name=trusting_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  7 04:39:53 np0005549474 systemd[1]: Started libpod-conmon-8352ad0f8dad8848ec78e9694c425bb8ecbbe08fc4659f94df2bc7b28763ebec.scope.
Dec  7 04:39:53 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:53 np0005549474 podman[80758]: 2025-12-07 09:39:53.896321315 +0000 UTC m=+0.033847708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1246714be20ebdf023b40ace5112433276e1c13fab53a42bb2220ff90cadc9b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1246714be20ebdf023b40ace5112433276e1c13fab53a42bb2220ff90cadc9b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1246714be20ebdf023b40ace5112433276e1c13fab53a42bb2220ff90cadc9b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:54 np0005549474 podman[80758]: 2025-12-07 09:39:54.009703763 +0000 UTC m=+0.147230136 container init 8352ad0f8dad8848ec78e9694c425bb8ecbbe08fc4659f94df2bc7b28763ebec (image=quay.io/ceph/ceph:v19, name=trusting_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:54 np0005549474 podman[80758]: 2025-12-07 09:39:54.019669117 +0000 UTC m=+0.157195490 container start 8352ad0f8dad8848ec78e9694c425bb8ecbbe08fc4659f94df2bc7b28763ebec (image=quay.io/ceph/ceph:v19, name=trusting_babbage, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:39:54 np0005549474 podman[80758]: 2025-12-07 09:39:54.023348975 +0000 UTC m=+0.160875328 container attach 8352ad0f8dad8848ec78e9694c425bb8ecbbe08fc4659f94df2bc7b28763ebec (image=quay.io/ceph/ceph:v19, name=trusting_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 04:39:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec  7 04:39:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3733925307' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  7 04:39:54 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2616469226' entity='client.admin' 
Dec  7 04:39:54 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:39:54 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:54 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3733925307' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  7 04:39:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec  7 04:39:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 04:39:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3733925307' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  7 04:39:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec  7 04:39:54 np0005549474 trusting_babbage[80774]: set require_min_compat_client to mimic
Dec  7 04:39:54 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec  7 04:39:54 np0005549474 systemd[1]: libpod-8352ad0f8dad8848ec78e9694c425bb8ecbbe08fc4659f94df2bc7b28763ebec.scope: Deactivated successfully.
Dec  7 04:39:54 np0005549474 podman[80758]: 2025-12-07 09:39:54.44660159 +0000 UTC m=+0.584127923 container died 8352ad0f8dad8848ec78e9694c425bb8ecbbe08fc4659f94df2bc7b28763ebec (image=quay.io/ceph/ceph:v19, name=trusting_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 04:39:54 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b1246714be20ebdf023b40ace5112433276e1c13fab53a42bb2220ff90cadc9b-merged.mount: Deactivated successfully.
Dec  7 04:39:54 np0005549474 podman[80758]: 2025-12-07 09:39:54.497462789 +0000 UTC m=+0.634989122 container remove 8352ad0f8dad8848ec78e9694c425bb8ecbbe08fc4659f94df2bc7b28763ebec (image=quay.io/ceph/ceph:v19, name=trusting_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 04:39:54 np0005549474 systemd[1]: libpod-conmon-8352ad0f8dad8848ec78e9694c425bb8ecbbe08fc4659f94df2bc7b28763ebec.scope: Deactivated successfully.
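The trusting_babbage container that just exited was a one-shot ceph CLI call: the audit channel logs the command at dispatch and again at finish, and the container prints the result ("set require_min_compat_client to mimic") before podman tears it down. A minimal sketch of the equivalent invocation, assuming the same image and keyring paths used by the other one-shot containers in this log:

    podman run --rm --net=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      osd set-require-min-compat-client mimic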
Dec  7 04:39:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:39:55 np0005549474 python3[80836]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
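Reformatted for readability, the command the Ansible task above hands to podman (taken verbatim from the _raw_params field) is:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch apply --in-file /home/ceph_spec.yaml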
Dec  7 04:39:55 np0005549474 podman[80837]: 2025-12-07 09:39:55.283142165 +0000 UTC m=+0.071459696 container create 90bed1ff05c594fcdd4dbe015642fc8b40a59e26dc9d2cb1968593d910917bff (image=quay.io/ceph/ceph:v19, name=jovial_lamport, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:39:55 np0005549474 systemd[1]: Started libpod-conmon-90bed1ff05c594fcdd4dbe015642fc8b40a59e26dc9d2cb1968593d910917bff.scope.
Dec  7 04:39:55 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:39:55 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de377f665f5f2de357014a388b5bea98213fddb5c43e389b30dff9637b90a4a1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:55 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de377f665f5f2de357014a388b5bea98213fddb5c43e389b30dff9637b90a4a1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:55 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de377f665f5f2de357014a388b5bea98213fddb5c43e389b30dff9637b90a4a1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:39:55 np0005549474 podman[80837]: 2025-12-07 09:39:55.249894404 +0000 UTC m=+0.038211995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:39:55 np0005549474 podman[80837]: 2025-12-07 09:39:55.347895443 +0000 UTC m=+0.136213014 container init 90bed1ff05c594fcdd4dbe015642fc8b40a59e26dc9d2cb1968593d910917bff (image=quay.io/ceph/ceph:v19, name=jovial_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:39:55 np0005549474 podman[80837]: 2025-12-07 09:39:55.355215407 +0000 UTC m=+0.143532908 container start 90bed1ff05c594fcdd4dbe015642fc8b40a59e26dc9d2cb1968593d910917bff (image=quay.io/ceph/ceph:v19, name=jovial_lamport, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:39:55 np0005549474 podman[80837]: 2025-12-07 09:39:55.358829882 +0000 UTC m=+0.147147423 container attach 90bed1ff05c594fcdd4dbe015642fc8b40a59e26dc9d2cb1968593d910917bff (image=quay.io/ceph/ceph:v19, name=jovial_lamport, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 04:39:55 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3733925307' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  7 04:39:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:39:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
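The repeated config-key set mon_commands above are the cephadm mgr module persisting its host inventory in the monitor's config-key store. That state can be inspected from the admin host with the ceph CLI; a sketch:

    # list the keys cephadm has written
    ceph config-key ls | grep mgr/cephadm
    # dump the stored inventory (JSON)
    ceph config-key get mgr/cephadm/inventory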
Dec  7 04:39:56 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Added host compute-0
Dec  7 04:39:56 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:39:56 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:39:57 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Dec  7 04:39:57 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Dec  7 04:39:57 np0005549474 ceph-mon[74516]: Added host compute-0
Dec  7 04:39:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:39:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:39:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:39:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:39:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:39:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:39:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:39:58 np0005549474 ceph-mon[74516]: Deploying cephadm binary to compute-1
Dec  7 04:39:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 04:40:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:01 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Added host compute-1
Dec  7 04:40:01 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Added host compute-1
Dec  7 04:40:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:02 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:02 np0005549474 ceph-mon[74516]: Added host compute-1
Dec  7 04:40:02 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:02 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:02 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Dec  7 04:40:02 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Dec  7 04:40:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:03 np0005549474 ceph-mon[74516]: Deploying cephadm binary to compute-2
Dec  7 04:40:03 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Added host compute-2
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Added host compute-2
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  7 04:40:06 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:06 np0005549474 jovial_lamport[80851]: Added host 'compute-0' with addr '192.168.122.100'
Dec  7 04:40:06 np0005549474 jovial_lamport[80851]: Added host 'compute-1' with addr '192.168.122.101'
Dec  7 04:40:06 np0005549474 jovial_lamport[80851]: Added host 'compute-2' with addr '192.168.122.102'
Dec  7 04:40:06 np0005549474 jovial_lamport[80851]: Scheduled mon update...
Dec  7 04:40:06 np0005549474 jovial_lamport[80851]: Scheduled mgr update...
Dec  7 04:40:06 np0005549474 jovial_lamport[80851]: Scheduled osd.default_drive_group update...
Dec  7 04:40:06 np0005549474 systemd[1]: libpod-90bed1ff05c594fcdd4dbe015642fc8b40a59e26dc9d2cb1968593d910917bff.scope: Deactivated successfully.
Dec  7 04:40:06 np0005549474 podman[80837]: 2025-12-07 09:40:06.578714956 +0000 UTC m=+11.367032517 container died 90bed1ff05c594fcdd4dbe015642fc8b40a59e26dc9d2cb1968593d910917bff (image=quay.io/ceph/ceph:v19, name=jovial_lamport, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:40:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay-de377f665f5f2de357014a388b5bea98213fddb5c43e389b30dff9637b90a4a1-merged.mount: Deactivated successfully.
Dec  7 04:40:06 np0005549474 podman[80837]: 2025-12-07 09:40:06.62072251 +0000 UTC m=+11.409040011 container remove 90bed1ff05c594fcdd4dbe015642fc8b40a59e26dc9d2cb1968593d910917bff (image=quay.io/ceph/ceph:v19, name=jovial_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:40:06 np0005549474 systemd[1]: libpod-conmon-90bed1ff05c594fcdd4dbe015642fc8b40a59e26dc9d2cb1968593d910917bff.scope: Deactivated successfully.
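The jovial_lamport output above summarizes what orch apply scheduled from /home/ceph_spec.yaml. A plausible reconstruction of that spec, based on the service names, placements, and host addresses logged here: the mon and mgr placement blocks are echoed verbatim later in this log (see the "Failed to apply" errors below), while the host stanzas and the drive-group device selector are assumptions.

    cat > /home/ceph-admin/specs/ceph_spec.yaml <<'EOF'
    service_type: host
    hostname: compute-0
    addr: 192.168.122.100
    ---
    service_type: host
    hostname: compute-1
    addr: 192.168.122.101
    ---
    service_type: host
    hostname: compute-2
    addr: 192.168.122.102
    ---
    service_type: mon
    service_name: mon
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    ---
    service_type: mgr
    service_name: mgr
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    data_devices:
      all: true   # assumption: the real device filter is not shown in this log
    EOF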
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:06 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:07 np0005549474 python3[81008]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:40:07 np0005549474 podman[81010]: 2025-12-07 09:40:07.09510345 +0000 UTC m=+0.062940941 container create f825832b331b91db077f36eb0a54e860c41a417fb647df7a7fbe1867f9ef4251 (image=quay.io/ceph/ceph:v19, name=vigorous_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 04:40:07 np0005549474 systemd[1]: Started libpod-conmon-f825832b331b91db077f36eb0a54e860c41a417fb647df7a7fbe1867f9ef4251.scope.
Dec  7 04:40:07 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:07 np0005549474 podman[81010]: 2025-12-07 09:40:07.053434955 +0000 UTC m=+0.021272476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:40:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9135d0a404febc29d0306f046a913ff1c67d3398470e900787a0f594152d2063/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9135d0a404febc29d0306f046a913ff1c67d3398470e900787a0f594152d2063/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9135d0a404febc29d0306f046a913ff1c67d3398470e900787a0f594152d2063/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:07 np0005549474 podman[81010]: 2025-12-07 09:40:07.213763787 +0000 UTC m=+0.181601288 container init f825832b331b91db077f36eb0a54e860c41a417fb647df7a7fbe1867f9ef4251 (image=quay.io/ceph/ceph:v19, name=vigorous_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:40:07 np0005549474 podman[81010]: 2025-12-07 09:40:07.220567758 +0000 UTC m=+0.188405249 container start f825832b331b91db077f36eb0a54e860c41a417fb647df7a7fbe1867f9ef4251 (image=quay.io/ceph/ceph:v19, name=vigorous_taussig, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:40:07 np0005549474 podman[81010]: 2025-12-07 09:40:07.223060534 +0000 UTC m=+0.190898055 container attach f825832b331b91db077f36eb0a54e860c41a417fb647df7a7fbe1867f9ef4251 (image=quay.io/ceph/ceph:v19, name=vigorous_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 04:40:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  7 04:40:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2756179200' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  7 04:40:07 np0005549474 vigorous_taussig[81026]: 
Dec  7 04:40:07 np0005549474 vigorous_taussig[81026]: {"fsid":"75f4c9fd-539a-5e17-b55a-0a12a4e2736c","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":57,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-07T09:39:07:860179+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-07T09:39:07.862134+0000","services":{}},"progress_events":{}}
Dec  7 04:40:07 np0005549474 systemd[1]: libpod-f825832b331b91db077f36eb0a54e860c41a417fb647df7a7fbe1867f9ef4251.scope: Deactivated successfully.
Dec  7 04:40:07 np0005549474 podman[81051]: 2025-12-07 09:40:07.678497352 +0000 UTC m=+0.028131447 container died f825832b331b91db077f36eb0a54e860c41a417fb647df7a7fbe1867f9ef4251 (image=quay.io/ceph/ceph:v19, name=vigorous_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:40:07 np0005549474 systemd[1]: var-lib-containers-storage-overlay-9135d0a404febc29d0306f046a913ff1c67d3398470e900787a0f594152d2063-merged.mount: Deactivated successfully.
Dec  7 04:40:07 np0005549474 podman[81051]: 2025-12-07 09:40:07.729041712 +0000 UTC m=+0.078675787 container remove f825832b331b91db077f36eb0a54e860c41a417fb647df7a7fbe1867f9ef4251 (image=quay.io/ceph/ceph:v19, name=vigorous_taussig, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 04:40:07 np0005549474 systemd[1]: libpod-conmon-f825832b331b91db077f36eb0a54e860c41a417fb647df7a7fbe1867f9ef4251.scope: Deactivated successfully.
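The Ansible task at 04:40:07 is polling for OSDs to come up: it pipes the status JSON printed above through jq. Applied to that output, the pipeline reduces to the following (a sketch, with the same entrypoint and keyring flags as the logged command):

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      status --format json | jq .osdmap.num_up_osds
    # prints: 0    (osdmap epoch 3 reports num_osds=0, num_up_osds=0)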
Dec  7 04:40:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:08 np0005549474 ceph-mon[74516]: Added host compute-2
Dec  7 04:40:08 np0005549474 ceph-mon[74516]: Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  7 04:40:08 np0005549474 ceph-mon[74516]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  7 04:40:08 np0005549474 ceph-mon[74516]: Marking host: compute-0 for OSDSpec preview refresh.
Dec  7 04:40:08 np0005549474 ceph-mon[74516]: Marking host: compute-1 for OSDSpec preview refresh.
Dec  7 04:40:08 np0005549474 ceph-mon[74516]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  7 04:40:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:40:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:40:23 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:40:23 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:40:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:24 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:40:24 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:40:24 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:24 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:24 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:24 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:24 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:40:24 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:40:24 np0005549474 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:40:24 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:40:24 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:40:25 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:40:25 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:40:25 np0005549474 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:40:25 np0005549474 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:40:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:40:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:26 np0005549474 ceph-mgr[74811]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  7 04:40:26 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  7 04:40:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:26 np0005549474 ceph-mgr[74811]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  7 04:40:26 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  7 04:40:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:26 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 35d14aff-1000-4e74-a76b-ca492487f937 (Updating crash deployment (+1 -> 2))
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:40:26.051+0000 7f83c8759640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: service_name: mon
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: placement:
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  hosts:
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  - compute-0
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  - compute-1
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  - compute-2
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:40:26.052+0000 7f83c8759640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: service_name: mgr
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: placement:
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  hosts:
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  - compute-0
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  - compute-1
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  - compute-2
Dec  7 04:40:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:40:26 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Dec  7 04:40:26 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
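The auth get-or-create dispatched just above mints (or returns, if it already exists) the crash agent's keyring before the daemon is deployed. Its CLI equivalent, as a sketch:

    ceph auth get-or-create client.crash.compute-1 \
      mon 'profile crash' mgr 'profile crash'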
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec  7 04:40:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:27 np0005549474 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:40:27 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:27 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:27 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:27 np0005549474 ceph-mon[74516]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  7 04:40:27 np0005549474 ceph-mon[74516]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  7 04:40:27 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 04:40:27 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 04:40:27 np0005549474 ceph-mon[74516]: Deploying daemon crash.compute-1 on compute-1
Dec  7 04:40:27 np0005549474 ceph-mon[74516]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
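At this point the cluster carries the CEPHADM_APPLY_SPEC_FAIL warning: the mon and mgr specs name compute-2, but the cephadm scheduler rejected the placement with "Unknown hosts" even though "Added host compute-2" was logged at 04:40:06, suggesting the serve loop evaluated the specs against a not-yet-refreshed host inventory. A typical first check and recovery, as a sketch:

    # confirm which hosts the orchestrator knows about
    ceph orch host ls
    # re-add the host if it is missing, using the address logged above
    ceph orch host add compute-2 192.168.122.102
    # re-apply the spec; the health warning clears once both specs apply
    ceph orch apply -i /home/ceph_spec.yaml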
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:40:27
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] No pools available
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:40:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:40:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:29 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 35d14aff-1000-4e74-a76b-ca492487f937 (Updating crash deployment (+1 -> 2))
Dec  7 04:40:29 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 35d14aff-1000-4e74-a76b-ca492487f937 (Updating crash deployment (+1 -> 2)) in 3 seconds
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:40:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
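The burst of mon commands above is cephadm gathering what OSD deployment needs: the client.bootstrap-osd keyring, a minimal ceph.conf for the target host, and the set of destroyed OSD ids eligible for reuse. Roughly the same data can be pulled by hand; a sketch:

    ceph auth get client.bootstrap-osd
    ceph config generate-minimal-conf
    ceph osd tree destroyed --format json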
Dec  7 04:40:29 np0005549474 podman[81161]: 2025-12-07 09:40:29.99118813 +0000 UTC m=+0.038366537 container create 01f80858f4ebb14b771b49ffe888618cbf5e255fba9f66c017181f7cf658cba6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_neumann, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Dec  7 04:40:30 np0005549474 systemd[1]: Started libpod-conmon-01f80858f4ebb14b771b49ffe888618cbf5e255fba9f66c017181f7cf658cba6.scope.
Dec  7 04:40:30 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:30 np0005549474 podman[81161]: 2025-12-07 09:40:30.068029065 +0000 UTC m=+0.115207512 container init 01f80858f4ebb14b771b49ffe888618cbf5e255fba9f66c017181f7cf658cba6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_neumann, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 04:40:30 np0005549474 podman[81161]: 2025-12-07 09:40:29.973705817 +0000 UTC m=+0.020884244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:30 np0005549474 podman[81161]: 2025-12-07 09:40:30.075289377 +0000 UTC m=+0.122467774 container start 01f80858f4ebb14b771b49ffe888618cbf5e255fba9f66c017181f7cf658cba6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:40:30 np0005549474 podman[81161]: 2025-12-07 09:40:30.078731518 +0000 UTC m=+0.125909935 container attach 01f80858f4ebb14b771b49ffe888618cbf5e255fba9f66c017181f7cf658cba6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_neumann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:40:30 np0005549474 trusting_neumann[81177]: 167 167
Dec  7 04:40:30 np0005549474 systemd[1]: libpod-01f80858f4ebb14b771b49ffe888618cbf5e255fba9f66c017181f7cf658cba6.scope: Deactivated successfully.
Dec  7 04:40:30 np0005549474 podman[81161]: 2025-12-07 09:40:30.080641884 +0000 UTC m=+0.127820321 container died 01f80858f4ebb14b771b49ffe888618cbf5e255fba9f66c017181f7cf658cba6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:40:30 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f6a7d0227d558aea1ccb4f2a5cfc90d52309b9980b9539087d38c04dd6c252fb-merged.mount: Deactivated successfully.
Dec  7 04:40:30 np0005549474 podman[81161]: 2025-12-07 09:40:30.122559054 +0000 UTC m=+0.169737451 container remove 01f80858f4ebb14b771b49ffe888618cbf5e255fba9f66c017181f7cf658cba6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 04:40:30 np0005549474 systemd[1]: libpod-conmon-01f80858f4ebb14b771b49ffe888618cbf5e255fba9f66c017181f7cf658cba6.scope: Deactivated successfully.
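Each of these short-lived helper containers follows the same arc in the journal: create, init, start, attach, died, remove, bracketed by a libpod-<id>.scope for the container itself and a libpod-conmon-<id>.scope for its conmon monitor (the "167 167" line looks like a uid/gid probe of the ceph user, which is 167 in these images). The same lifecycle can be watched from the podman side; a minimal sketch, with illustrative filters:

  # stream container lifecycle events as podman emits them
  podman events --filter event=create --filter event=remove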
Dec  7 04:40:30 np0005549474 podman[81203]: 2025-12-07 09:40:30.283365882 +0000 UTC m=+0.044394163 container create 5f2937ee6e241edcb10e1c23f446086214485e3b1607938602e60e9b01b1dde4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:40:30 np0005549474 systemd[1]: Started libpod-conmon-5f2937ee6e241edcb10e1c23f446086214485e3b1607938602e60e9b01b1dde4.scope.
Dec  7 04:40:30 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:30 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66231b00cc97bab0e69e400818a0a42f2d85546bcb64195751e1c5f5f2b064b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:30 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66231b00cc97bab0e69e400818a0a42f2d85546bcb64195751e1c5f5f2b064b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:30 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66231b00cc97bab0e69e400818a0a42f2d85546bcb64195751e1c5f5f2b064b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:30 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66231b00cc97bab0e69e400818a0a42f2d85546bcb64195751e1c5f5f2b064b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:30 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66231b00cc97bab0e69e400818a0a42f2d85546bcb64195751e1c5f5f2b064b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:30 np0005549474 podman[81203]: 2025-12-07 09:40:30.262736947 +0000 UTC m=+0.023765278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:30 np0005549474 podman[81203]: 2025-12-07 09:40:30.374678132 +0000 UTC m=+0.135706493 container init 5f2937ee6e241edcb10e1c23f446086214485e3b1607938602e60e9b01b1dde4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 04:40:30 np0005549474 podman[81203]: 2025-12-07 09:40:30.382474631 +0000 UTC m=+0.143502932 container start 5f2937ee6e241edcb10e1c23f446086214485e3b1607938602e60e9b01b1dde4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leavitt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 04:40:30 np0005549474 podman[81203]: 2025-12-07 09:40:30.388068485 +0000 UTC m=+0.149096746 container attach 5f2937ee6e241edcb10e1c23f446086214485e3b1607938602e60e9b01b1dde4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  7 04:40:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:40:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:40:30 np0005549474 quirky_leavitt[81220]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:40:30 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 04:40:30 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 04:40:30 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 32dc95f1-8dbf-4ad2-8ecd-93489439352c
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c"} v 0)
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2468099184' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c"}]: dispatch
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2468099184' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c"}]': finished
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:31 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "24b45d5b-5e40-4ac8-980f-eccc62ab0425"} v 0)
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1839153269' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "24b45d5b-5e40-4ac8-980f-eccc62ab0425"}]: dispatch
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1839153269' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "24b45d5b-5e40-4ac8-980f-eccc62ab0425"}]': finished
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:31 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:31 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
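The two "failed to return metadata" errors are expected at this point: osd.0 and osd.1 have just been registered in the osdmap ("2 total, 0 up, 2 in") but their daemons have not booted, so the mon has nothing to report and returns (2) ENOENT. A hedged re-check for later, once the daemons are up (illustrative invocation, not taken from this log):

  # reports host, devices and version for osd.0 once the daemon has registered
  ceph osd metadata 0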
Dec  7 04:40:31 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec  7 04:40:31 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  7 04:40:31 np0005549474 lvm[81281]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:40:31 np0005549474 lvm[81281]: VG ceph_vg0 finished
Dec  7 04:40:31 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  7 04:40:31 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:31 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2468099184' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c"}]: dispatch
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2468099184' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c"}]': finished
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.101:0/1839153269' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "24b45d5b-5e40-4ac8-980f-eccc62ab0425"}]: dispatch
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.101:0/1839153269' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "24b45d5b-5e40-4ac8-980f-eccc62ab0425"}]': finished
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885774653' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  7 04:40:31 np0005549474 quirky_leavitt[81220]: stderr: got monmap epoch 1
Dec  7 04:40:31 np0005549474 quirky_leavitt[81220]: --> Creating keyring file for osd.0
Dec  7 04:40:31 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec  7 04:40:31 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec  7 04:40:31 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 32dc95f1-8dbf-4ad2-8ecd-93489439352c --setuser ceph --setgroup ceph
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  7 04:40:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/796051432' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  7 04:40:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:32 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  7 04:40:32 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 2 completed events
Dec  7 04:40:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:40:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:33 np0005549474 ceph-mon[74516]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  7 04:40:33 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:34 np0005549474 quirky_leavitt[81220]: stderr: 2025-12-07T09:40:31.871+0000 7f145eeb1740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec  7 04:40:34 np0005549474 quirky_leavitt[81220]: stderr: 2025-12-07T09:40:32.137+0000 7f145eeb1740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec  7 04:40:34 np0005549474 quirky_leavitt[81220]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  7 04:40:35 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  7 04:40:35 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  7 04:40:35 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:35 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:35 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  7 04:40:35 np0005549474 quirky_leavitt[81220]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  7 04:40:35 np0005549474 quirky_leavitt[81220]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  7 04:40:35 np0005549474 quirky_leavitt[81220]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
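The quirky_leavitt output above is the expanded form of a single ceph-volume call: prepare (osd new, keyring, bluestore mkfs) followed by activate (prime-osd-dir, block symlink, ownership fixes). A minimal sketch of the equivalent one-shot invocation, assuming the same VG/LV as in the log and BlueStore as the objectstore:

  # prepare + activate a BlueStore OSD on an existing logical volume in one step
  ceph-volume lvm create --bluestore --data ceph_vg0/ceph_lv0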
Dec  7 04:40:35 np0005549474 systemd[1]: libpod-5f2937ee6e241edcb10e1c23f446086214485e3b1607938602e60e9b01b1dde4.scope: Deactivated successfully.
Dec  7 04:40:35 np0005549474 podman[81203]: 2025-12-07 09:40:35.391556891 +0000 UTC m=+5.152585162 container died 5f2937ee6e241edcb10e1c23f446086214485e3b1607938602e60e9b01b1dde4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leavitt, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:40:35 np0005549474 systemd[1]: libpod-5f2937ee6e241edcb10e1c23f446086214485e3b1607938602e60e9b01b1dde4.scope: Consumed 2.031s CPU time.
Dec  7 04:40:35 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f66231b00cc97bab0e69e400818a0a42f2d85546bcb64195751e1c5f5f2b064b-merged.mount: Deactivated successfully.
Dec  7 04:40:35 np0005549474 podman[81203]: 2025-12-07 09:40:35.438875139 +0000 UTC m=+5.199903400 container remove 5f2937ee6e241edcb10e1c23f446086214485e3b1607938602e60e9b01b1dde4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:40:35 np0005549474 systemd[1]: libpod-conmon-5f2937ee6e241edcb10e1c23f446086214485e3b1607938602e60e9b01b1dde4.scope: Deactivated successfully.
Dec  7 04:40:35 np0005549474 podman[82289]: 2025-12-07 09:40:35.936952434 +0000 UTC m=+0.039018186 container create 81caf60ffed24b2a8ec6ce295ddce9bade2b46f27e301ca30a42c5571ba12135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hodgkin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 04:40:35 np0005549474 systemd[1]: Started libpod-conmon-81caf60ffed24b2a8ec6ce295ddce9bade2b46f27e301ca30a42c5571ba12135.scope.
Dec  7 04:40:36 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:36 np0005549474 podman[82289]: 2025-12-07 09:40:35.921963473 +0000 UTC m=+0.024029255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:36 np0005549474 podman[82289]: 2025-12-07 09:40:36.022124962 +0000 UTC m=+0.124190744 container init 81caf60ffed24b2a8ec6ce295ddce9bade2b46f27e301ca30a42c5571ba12135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:40:36 np0005549474 podman[82289]: 2025-12-07 09:40:36.032539938 +0000 UTC m=+0.134605700 container start 81caf60ffed24b2a8ec6ce295ddce9bade2b46f27e301ca30a42c5571ba12135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 04:40:36 np0005549474 elastic_hodgkin[82305]: 167 167
Dec  7 04:40:36 np0005549474 systemd[1]: libpod-81caf60ffed24b2a8ec6ce295ddce9bade2b46f27e301ca30a42c5571ba12135.scope: Deactivated successfully.
Dec  7 04:40:36 np0005549474 podman[82289]: 2025-12-07 09:40:36.038711849 +0000 UTC m=+0.140777641 container attach 81caf60ffed24b2a8ec6ce295ddce9bade2b46f27e301ca30a42c5571ba12135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hodgkin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:40:36 np0005549474 podman[82289]: 2025-12-07 09:40:36.039194483 +0000 UTC m=+0.141260275 container died 81caf60ffed24b2a8ec6ce295ddce9bade2b46f27e301ca30a42c5571ba12135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 04:40:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:36 np0005549474 systemd[1]: var-lib-containers-storage-overlay-7b118e671aec2edb970880b8fd68b50b0fb640237de87053053d3ccb0c0709b9-merged.mount: Deactivated successfully.
Dec  7 04:40:36 np0005549474 podman[82289]: 2025-12-07 09:40:36.093556048 +0000 UTC m=+0.195621810 container remove 81caf60ffed24b2a8ec6ce295ddce9bade2b46f27e301ca30a42c5571ba12135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 04:40:36 np0005549474 systemd[1]: libpod-conmon-81caf60ffed24b2a8ec6ce295ddce9bade2b46f27e301ca30a42c5571ba12135.scope: Deactivated successfully.
Dec  7 04:40:36 np0005549474 podman[82328]: 2025-12-07 09:40:36.307853926 +0000 UTC m=+0.060668852 container create 8c33ac26afabb5fa9ae3f33a84051f44e9f189d24c6a0a4795564217cdf45bae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 04:40:36 np0005549474 systemd[1]: Started libpod-conmon-8c33ac26afabb5fa9ae3f33a84051f44e9f189d24c6a0a4795564217cdf45bae.scope.
Dec  7 04:40:36 np0005549474 podman[82328]: 2025-12-07 09:40:36.280714809 +0000 UTC m=+0.033529715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:36 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8da336d3d82b8de0a663653dfb29f7d6beaa2da17d1eba381aebd218768bfa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8da336d3d82b8de0a663653dfb29f7d6beaa2da17d1eba381aebd218768bfa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8da336d3d82b8de0a663653dfb29f7d6beaa2da17d1eba381aebd218768bfa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8da336d3d82b8de0a663653dfb29f7d6beaa2da17d1eba381aebd218768bfa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:36 np0005549474 podman[82328]: 2025-12-07 09:40:36.431002689 +0000 UTC m=+0.183817645 container init 8c33ac26afabb5fa9ae3f33a84051f44e9f189d24c6a0a4795564217cdf45bae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 04:40:36 np0005549474 podman[82328]: 2025-12-07 09:40:36.445302198 +0000 UTC m=+0.198117104 container start 8c33ac26afabb5fa9ae3f33a84051f44e9f189d24c6a0a4795564217cdf45bae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 04:40:36 np0005549474 podman[82328]: 2025-12-07 09:40:36.451080668 +0000 UTC m=+0.203895644 container attach 8c33ac26afabb5fa9ae3f33a84051f44e9f189d24c6a0a4795564217cdf45bae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:40:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]: {
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:    "0": [
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:        {
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            "devices": [
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "/dev/loop3"
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            ],
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            "lv_name": "ceph_lv0",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            "lv_size": "21470642176",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            "name": "ceph_lv0",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            "tags": {
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.cluster_name": "ceph",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.crush_device_class": "",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.encrypted": "0",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.osd_id": "0",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.type": "block",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.vdo": "0",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:                "ceph.with_tpm": "0"
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            },
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            "type": "block",
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:            "vg_name": "ceph_vg0"
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:        }
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]:    ]
Dec  7 04:40:36 np0005549474 jolly_davinci[82344]: }
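This JSON block is the per-OSD inventory that ceph-volume emits in machine-readable mode: the top-level key is the OSD id, and each entry carries the LV path plus the ceph.* LVM tags written during prepare. A hedged sketch of producing and filtering it (the jq filter is illustrative):

  # dump LVM-backed OSDs as JSON and extract the backing devices for osd id 0
  ceph-volume lvm list --format json | jq '."0"[0].devices'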
Dec  7 04:40:36 np0005549474 systemd[1]: libpod-8c33ac26afabb5fa9ae3f33a84051f44e9f189d24c6a0a4795564217cdf45bae.scope: Deactivated successfully.
Dec  7 04:40:36 np0005549474 podman[82328]: 2025-12-07 09:40:36.746020851 +0000 UTC m=+0.498835737 container died 8c33ac26afabb5fa9ae3f33a84051f44e9f189d24c6a0a4795564217cdf45bae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:40:36 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e8da336d3d82b8de0a663653dfb29f7d6beaa2da17d1eba381aebd218768bfa9-merged.mount: Deactivated successfully.
Dec  7 04:40:36 np0005549474 podman[82328]: 2025-12-07 09:40:36.81240664 +0000 UTC m=+0.565221516 container remove 8c33ac26afabb5fa9ae3f33a84051f44e9f189d24c6a0a4795564217cdf45bae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:40:36 np0005549474 systemd[1]: libpod-conmon-8c33ac26afabb5fa9ae3f33a84051f44e9f189d24c6a0a4795564217cdf45bae.scope: Deactivated successfully.
Dec  7 04:40:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec  7 04:40:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  7 04:40:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:40:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:40:36 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec  7 04:40:36 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec  7 04:40:37 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  7 04:40:37 np0005549474 ceph-mon[74516]: Deploying daemon osd.0 on compute-0
Dec  7 04:40:37 np0005549474 podman[82458]: 2025-12-07 09:40:37.364090516 +0000 UTC m=+0.038461499 container create c03f1365a6ed786f909aa2b3c28a98a4326a74cdbe7e8a7312229dcf99fe32a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kilby, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 04:40:37 np0005549474 systemd[1]: Started libpod-conmon-c03f1365a6ed786f909aa2b3c28a98a4326a74cdbe7e8a7312229dcf99fe32a4.scope.
Dec  7 04:40:37 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:37 np0005549474 podman[82458]: 2025-12-07 09:40:37.441892859 +0000 UTC m=+0.116263842 container init c03f1365a6ed786f909aa2b3c28a98a4326a74cdbe7e8a7312229dcf99fe32a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kilby, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:40:37 np0005549474 podman[82458]: 2025-12-07 09:40:37.347946373 +0000 UTC m=+0.022317436 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:37 np0005549474 podman[82458]: 2025-12-07 09:40:37.451362057 +0000 UTC m=+0.125733030 container start c03f1365a6ed786f909aa2b3c28a98a4326a74cdbe7e8a7312229dcf99fe32a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:40:37 np0005549474 podman[82458]: 2025-12-07 09:40:37.454090297 +0000 UTC m=+0.128461270 container attach c03f1365a6ed786f909aa2b3c28a98a4326a74cdbe7e8a7312229dcf99fe32a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:40:37 np0005549474 friendly_kilby[82473]: 167 167
Dec  7 04:40:37 np0005549474 systemd[1]: libpod-c03f1365a6ed786f909aa2b3c28a98a4326a74cdbe7e8a7312229dcf99fe32a4.scope: Deactivated successfully.
Dec  7 04:40:37 np0005549474 podman[82458]: 2025-12-07 09:40:37.455972432 +0000 UTC m=+0.130343405 container died c03f1365a6ed786f909aa2b3c28a98a4326a74cdbe7e8a7312229dcf99fe32a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:40:37 np0005549474 systemd[1]: var-lib-containers-storage-overlay-8af43bb1f1e4433f196d4a992993b58a1209056d35a06da6925faa0da749cbd5-merged.mount: Deactivated successfully.
Dec  7 04:40:37 np0005549474 podman[82458]: 2025-12-07 09:40:37.499332554 +0000 UTC m=+0.173703537 container remove c03f1365a6ed786f909aa2b3c28a98a4326a74cdbe7e8a7312229dcf99fe32a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:40:37 np0005549474 systemd[1]: libpod-conmon-c03f1365a6ed786f909aa2b3c28a98a4326a74cdbe7e8a7312229dcf99fe32a4.scope: Deactivated successfully.
Dec  7 04:40:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec  7 04:40:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  7 04:40:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:40:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:40:37 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Dec  7 04:40:37 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Dec  7 04:40:37 np0005549474 podman[82502]: 2025-12-07 09:40:37.839378532 +0000 UTC m=+0.065837093 container create 4b03d6d583fcf2e76997f3ac2ade4aff95a75a592df1fd6978add102a26d25c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:40:37 np0005549474 systemd[1]: Started libpod-conmon-4b03d6d583fcf2e76997f3ac2ade4aff95a75a592df1fd6978add102a26d25c9.scope.
Dec  7 04:40:37 np0005549474 podman[82502]: 2025-12-07 09:40:37.81378664 +0000 UTC m=+0.040245251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:37 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160425f42de8824ecbbd4b419a3603ab4df0fb9acd03314476833ab711eebc0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160425f42de8824ecbbd4b419a3603ab4df0fb9acd03314476833ab711eebc0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160425f42de8824ecbbd4b419a3603ab4df0fb9acd03314476833ab711eebc0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160425f42de8824ecbbd4b419a3603ab4df0fb9acd03314476833ab711eebc0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/160425f42de8824ecbbd4b419a3603ab4df0fb9acd03314476833ab711eebc0b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:37 np0005549474 podman[82502]: 2025-12-07 09:40:37.962993919 +0000 UTC m=+0.189452470 container init 4b03d6d583fcf2e76997f3ac2ade4aff95a75a592df1fd6978add102a26d25c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 04:40:37 np0005549474 podman[82502]: 2025-12-07 09:40:37.976403583 +0000 UTC m=+0.202862154 container start 4b03d6d583fcf2e76997f3ac2ade4aff95a75a592df1fd6978add102a26d25c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Dec  7 04:40:37 np0005549474 podman[82502]: 2025-12-07 09:40:37.980804411 +0000 UTC m=+0.207262972 container attach 4b03d6d583fcf2e76997f3ac2ade4aff95a75a592df1fd6978add102a26d25c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate-test, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 04:40:38 np0005549474 python3[82546]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
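Stripped of the podman plumbing and bind mounts, the Ansible task above reduces to a status query with a jq extraction; a sketch using the same paths as the logged command line:

  # prints the number of OSDs currently reported up by the osdmap
  ceph -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
       status --format json | jq .osdmap.num_up_osds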
Dec  7 04:40:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:38 np0005549474 podman[82550]: 2025-12-07 09:40:38.106311404 +0000 UTC m=+0.047615638 container create b3cbff9d25b9ad16a7639bfa128f5d4032965954ff786641ac9ea0f0f75e8fb2 (image=quay.io/ceph/ceph:v19, name=festive_williamson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Dec  7 04:40:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate-test[82543]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec  7 04:40:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate-test[82543]:                            [--no-systemd] [--no-tmpfs]
Dec  7 04:40:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate-test[82543]: ceph-volume activate: error: unrecognized arguments: --bad-option
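The deliberate --bad-option in the osd-0-activate-test container reads like a capability probe: a ceph-volume that knows the top-level activate subcommand rejects the flag with "unrecognized arguments" (as here), while an older build would reject "activate" itself, so the stderr text tells the caller which it has. Reconstructed as a sketch (this interpretation is an assumption; the log shows only the error):

  # non-zero exit either way; stderr distinguishes the two ceph-volume vintages
  ceph-volume activate --bad-option 2>&1 | grep -q 'unrecognized arguments' \
      && echo 'top-level activate supported'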
Dec  7 04:40:38 np0005549474 systemd[1]: Started libpod-conmon-b3cbff9d25b9ad16a7639bfa128f5d4032965954ff786641ac9ea0f0f75e8fb2.scope.
Dec  7 04:40:38 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  7 04:40:38 np0005549474 ceph-mon[74516]: Deploying daemon osd.1 on compute-1
Dec  7 04:40:38 np0005549474 systemd[1]: libpod-4b03d6d583fcf2e76997f3ac2ade4aff95a75a592df1fd6978add102a26d25c9.scope: Deactivated successfully.
Dec  7 04:40:38 np0005549474 podman[82502]: 2025-12-07 09:40:38.153162459 +0000 UTC m=+0.379621050 container died 4b03d6d583fcf2e76997f3ac2ade4aff95a75a592df1fd6978add102a26d25c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate-test, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:40:38 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea989ad6df761afdb0e9671b6beb32ebda97a26926e727c869e4e44631583629/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea989ad6df761afdb0e9671b6beb32ebda97a26926e727c869e4e44631583629/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea989ad6df761afdb0e9671b6beb32ebda97a26926e727c869e4e44631583629/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:38 np0005549474 podman[82550]: 2025-12-07 09:40:38.087877652 +0000 UTC m=+0.029181936 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:40:38 np0005549474 podman[82550]: 2025-12-07 09:40:38.185733824 +0000 UTC m=+0.127038088 container init b3cbff9d25b9ad16a7639bfa128f5d4032965954ff786641ac9ea0f0f75e8fb2 (image=quay.io/ceph/ceph:v19, name=festive_williamson, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:40:38 np0005549474 podman[82550]: 2025-12-07 09:40:38.192450091 +0000 UTC m=+0.133754325 container start b3cbff9d25b9ad16a7639bfa128f5d4032965954ff786641ac9ea0f0f75e8fb2 (image=quay.io/ceph/ceph:v19, name=festive_williamson, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 04:40:38 np0005549474 systemd[1]: var-lib-containers-storage-overlay-160425f42de8824ecbbd4b419a3603ab4df0fb9acd03314476833ab711eebc0b-merged.mount: Deactivated successfully.
Dec  7 04:40:38 np0005549474 podman[82550]: 2025-12-07 09:40:38.195683946 +0000 UTC m=+0.136988200 container attach b3cbff9d25b9ad16a7639bfa128f5d4032965954ff786641ac9ea0f0f75e8fb2 (image=quay.io/ceph/ceph:v19, name=festive_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 04:40:38 np0005549474 podman[82502]: 2025-12-07 09:40:38.210328086 +0000 UTC m=+0.436786627 container remove 4b03d6d583fcf2e76997f3ac2ade4aff95a75a592df1fd6978add102a26d25c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:40:38 np0005549474 systemd[1]: libpod-conmon-4b03d6d583fcf2e76997f3ac2ade4aff95a75a592df1fd6978add102a26d25c9.scope: Deactivated successfully.
Dec  7 04:40:38 np0005549474 systemd[1]: Reloading.
Dec  7 04:40:38 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:40:38 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:40:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  7 04:40:38 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258034374' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  7 04:40:38 np0005549474 festive_williamson[82566]: 
Dec  7 04:40:38 np0005549474 festive_williamson[82566]: {"fsid":"75f4c9fd-539a-5e17-b55a-0a12a4e2736c","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":88,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1765100431,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-07T09:39:07.860179+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-07T09:40:30.054434+0000","services":{}},"progress_events":{}}
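
The blob printed by festive_williamson is "ceph status" in JSON form: health is HEALTH_WARN because cephadm failed to apply the mon and mgr specs, and osdmap shows 2 OSDs in but 0 up yet, since osd.0 is only now being activated. A short way to pull those fields out, assuming the JSON above has been saved to status.json (the filename is illustrative):

    import json

    with open("status.json") as f:
        status = json.load(f)

    print(status["health"]["status"])                   # HEALTH_WARN
    for name, check in status["health"]["checks"].items():
        # CEPHADM_APPLY_SPEC_FAIL - Failed to apply 2 service(s): mon,mgr
        print(name, "-", check["summary"]["message"])

    osd = status["osdmap"]
    print(f'{osd["num_up_osds"]}/{osd["num_osds"]} up, {osd["num_in_osds"]} in')  # 0/2 up, 2 in
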
Dec  7 04:40:38 np0005549474 podman[82550]: 2025-12-07 09:40:38.644427252 +0000 UTC m=+0.585731486 container died b3cbff9d25b9ad16a7639bfa128f5d4032965954ff786641ac9ea0f0f75e8fb2 (image=quay.io/ceph/ceph:v19, name=festive_williamson, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 04:40:38 np0005549474 systemd[1]: libpod-b3cbff9d25b9ad16a7639bfa128f5d4032965954ff786641ac9ea0f0f75e8fb2.scope: Deactivated successfully.
Dec  7 04:40:38 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ea989ad6df761afdb0e9671b6beb32ebda97a26926e727c869e4e44631583629-merged.mount: Deactivated successfully.
Dec  7 04:40:38 np0005549474 podman[82550]: 2025-12-07 09:40:38.815534233 +0000 UTC m=+0.756838467 container remove b3cbff9d25b9ad16a7639bfa128f5d4032965954ff786641ac9ea0f0f75e8fb2 (image=quay.io/ceph/ceph:v19, name=festive_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:40:38 np0005549474 systemd[1]: libpod-conmon-b3cbff9d25b9ad16a7639bfa128f5d4032965954ff786641ac9ea0f0f75e8fb2.scope: Deactivated successfully.
Dec  7 04:40:38 np0005549474 systemd[1]: Reloading.
Dec  7 04:40:38 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:40:38 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:40:39 np0005549474 systemd[1]: Starting Ceph osd.0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:40:39 np0005549474 podman[82763]: 2025-12-07 09:40:39.31372016 +0000 UTC m=+0.040767668 container create 6ccf7e9d7a469535cd206cea30a48619e66eb3019157ae76ce6e3a14c7914d84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 04:40:39 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d9fdd48b0bc119bb8c945384fca440bc8585e04246e82cd76a9ede3cd956db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d9fdd48b0bc119bb8c945384fca440bc8585e04246e82cd76a9ede3cd956db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d9fdd48b0bc119bb8c945384fca440bc8585e04246e82cd76a9ede3cd956db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d9fdd48b0bc119bb8c945384fca440bc8585e04246e82cd76a9ede3cd956db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d9fdd48b0bc119bb8c945384fca440bc8585e04246e82cd76a9ede3cd956db/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:39 np0005549474 podman[82763]: 2025-12-07 09:40:39.293971651 +0000 UTC m=+0.021019179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:39 np0005549474 podman[82763]: 2025-12-07 09:40:39.391949935 +0000 UTC m=+0.118997463 container init 6ccf7e9d7a469535cd206cea30a48619e66eb3019157ae76ce6e3a14c7914d84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:40:39 np0005549474 podman[82763]: 2025-12-07 09:40:39.39994237 +0000 UTC m=+0.126989918 container start 6ccf7e9d7a469535cd206cea30a48619e66eb3019157ae76ce6e3a14c7914d84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:40:39 np0005549474 podman[82763]: 2025-12-07 09:40:39.404237086 +0000 UTC m=+0.131284624 container attach 6ccf7e9d7a469535cd206cea30a48619e66eb3019157ae76ce6e3a14c7914d84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 04:40:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 04:40:39 np0005549474 bash[82763]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 04:40:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 04:40:39 np0005549474 bash[82763]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 04:40:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:40 np0005549474 lvm[82861]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:40:40 np0005549474 lvm[82861]: VG ceph_vg0 finished
Dec  7 04:40:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  7 04:40:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 04:40:40 np0005549474 bash[82763]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  7 04:40:40 np0005549474 bash[82763]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 04:40:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 04:40:40 np0005549474 bash[82763]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 04:40:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  7 04:40:40 np0005549474 bash[82763]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  7 04:40:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  7 04:40:40 np0005549474 bash[82763]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  7 04:40:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:40 np0005549474 bash[82763]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:40 np0005549474 bash[82763]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  7 04:40:40 np0005549474 bash[82763]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  7 04:40:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  7 04:40:40 np0005549474 bash[82763]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  7 04:40:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate[82779]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  7 04:40:40 np0005549474 bash[82763]: --> ceph-volume lvm activate successful for osd ID: 0
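
The raw activator found no raw-prepared OSD on this host, so ceph-volume fell back to the LVM path: prime the OSD directory from the logical volume's labels, relink the block symlink, and hand ownership to ceph:ceph. Replayed by hand, the sequence looks like the sketch below (a Python transcription of the exact commands logged above, not a replacement for ceph-volume; it assumes the same VG/LV names):

    import subprocess

    osd_dir = "/var/lib/ceph/osd/ceph-0"
    lv = "/dev/ceph_vg0/ceph_lv0"

    commands = [
        ["chown", "-R", "ceph:ceph", osd_dir],
        ["ceph-bluestore-tool", "--cluster=ceph", "prime-osd-dir",
         "--dev", lv, "--path", osd_dir, "--no-mon-config"],
        ["ln", "-snf", lv, osd_dir + "/block"],
        ["chown", "-h", "ceph:ceph", osd_dir + "/block"],
        ["chown", "-R", "ceph:ceph", "/dev/dm-0"],   # the dm node behind the LV
        ["chown", "-R", "ceph:ceph", osd_dir],
    ]
    for cmd in commands:
        subprocess.run(cmd, check=True)   # abort on the first failure
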
Dec  7 04:40:40 np0005549474 systemd[1]: libpod-6ccf7e9d7a469535cd206cea30a48619e66eb3019157ae76ce6e3a14c7914d84.scope: Deactivated successfully.
Dec  7 04:40:40 np0005549474 podman[82763]: 2025-12-07 09:40:40.655606312 +0000 UTC m=+1.382653820 container died 6ccf7e9d7a469535cd206cea30a48619e66eb3019157ae76ce6e3a14c7914d84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:40:40 np0005549474 systemd[1]: libpod-6ccf7e9d7a469535cd206cea30a48619e66eb3019157ae76ce6e3a14c7914d84.scope: Consumed 1.354s CPU time.
Dec  7 04:40:40 np0005549474 systemd[1]: var-lib-containers-storage-overlay-36d9fdd48b0bc119bb8c945384fca440bc8585e04246e82cd76a9ede3cd956db-merged.mount: Deactivated successfully.
Dec  7 04:40:40 np0005549474 podman[82763]: 2025-12-07 09:40:40.706908127 +0000 UTC m=+1.433955645 container remove 6ccf7e9d7a469535cd206cea30a48619e66eb3019157ae76ce6e3a14c7914d84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 04:40:40 np0005549474 podman[83014]: 2025-12-07 09:40:40.87638338 +0000 UTC m=+0.038068378 container create 0a789d6072e346853b60ed453dc9f68017a569dd2eaa7e38858909a9737518d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:40:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b65d111c1e56862a596debde8d915cbfcaf0205ca026cebdd1aa769af1b4417/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b65d111c1e56862a596debde8d915cbfcaf0205ca026cebdd1aa769af1b4417/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b65d111c1e56862a596debde8d915cbfcaf0205ca026cebdd1aa769af1b4417/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b65d111c1e56862a596debde8d915cbfcaf0205ca026cebdd1aa769af1b4417/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b65d111c1e56862a596debde8d915cbfcaf0205ca026cebdd1aa769af1b4417/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:40 np0005549474 podman[83014]: 2025-12-07 09:40:40.943086917 +0000 UTC m=+0.104771935 container init 0a789d6072e346853b60ed453dc9f68017a569dd2eaa7e38858909a9737518d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 04:40:40 np0005549474 podman[83014]: 2025-12-07 09:40:40.949697801 +0000 UTC m=+0.111382789 container start 0a789d6072e346853b60ed453dc9f68017a569dd2eaa7e38858909a9737518d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:40:40 np0005549474 bash[83014]: 0a789d6072e346853b60ed453dc9f68017a569dd2eaa7e38858909a9737518d9
Dec  7 04:40:40 np0005549474 podman[83014]: 2025-12-07 09:40:40.861887705 +0000 UTC m=+0.023572733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:40 np0005549474 systemd[1]: Started Ceph osd.0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:40:40 np0005549474 ceph-osd[83033]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 04:40:40 np0005549474 ceph-osd[83033]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec  7 04:40:40 np0005549474 ceph-osd[83033]: pidfile_write: ignore empty --pid-file
Dec  7 04:40:40 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:40 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:40 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:40 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:40 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) close
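
Three recurring details in the bdev lines above are harmless: the F_SET_FILE_RW_HINT ioctl is simply unsupported on this device-mapper target, BlueStore keeps its own 4096-byte block size even though the device advertises a 512-byte st_blksize, and the odd-looking size is exactly 4 MiB short of 20 GiB, plausibly one 4 MiB extent's worth of LVM metadata overhead on a 20 GiB physical volume. The numbers check out (a quick sketch; the stat call assumes it runs on this host):

    import os

    st = os.stat("/var/lib/ceph/osd/ceph-0/block")
    print(st.st_blksize)              # 512, as reported in the log

    size = 0x4ffc00000
    assert size == 21470642176        # the logged byte count
    print(size / 2**30)               # 19.996..., reported as "20 GiB"
    print(20 * 2**30 - size)          # 4194304 bytes, i.e. 4 MiB
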
Dec  7 04:40:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:40:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:40:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) close
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) close
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) close
Dec  7 04:40:41 np0005549474 podman[83139]: 2025-12-07 09:40:41.560950745 +0000 UTC m=+0.063913986 container create 48550cc849412007e527ba37de7b0f376505299420f7f2331403e579855cf140 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) close
Dec  7 04:40:41 np0005549474 systemd[1]: Started libpod-conmon-48550cc849412007e527ba37de7b0f376505299420f7f2331403e579855cf140.scope.
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939c00 /var/lib/ceph/osd/ceph-0/block) close
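
The _set_cache_sizes ratios a few lines up partition the 1 GiB default cache completely: 45% metadata, 45% key/value (RocksDB), 4% kv_onode and 6% data. A quick check of the split, using only the values from the logged line:

    cache = 1073741824      # cache_size from the log, 1 GiB
    ratios = {"meta": 0.45, "kv": 0.45, "kv_onode": 0.04, "data": 0.06}
    for name, frac in ratios.items():
        print(name, int(cache * frac))   # e.g. meta 483183820
    print(sum((45, 45, 4, 6)))           # 100: the whole cache is accounted for
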
Dec  7 04:40:41 np0005549474 podman[83139]: 2025-12-07 09:40:41.536281991 +0000 UTC m=+0.039245322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:41 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:40:41 np0005549474 podman[83139]: 2025-12-07 09:40:41.664829663 +0000 UTC m=+0.167792954 container init 48550cc849412007e527ba37de7b0f376505299420f7f2331403e579855cf140 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:40:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:41 np0005549474 podman[83139]: 2025-12-07 09:40:41.672976142 +0000 UTC m=+0.175939373 container start 48550cc849412007e527ba37de7b0f376505299420f7f2331403e579855cf140 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_edison, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:40:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:41 np0005549474 podman[83139]: 2025-12-07 09:40:41.676043162 +0000 UTC m=+0.179006433 container attach 48550cc849412007e527ba37de7b0f376505299420f7f2331403e579855cf140 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:40:41 np0005549474 recursing_edison[83157]: 167 167
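
recursing_edison prints "167 167", matching the uid:gid the OSD set for itself at startup (ceph:ceph is 167:167 in these images). This looks like cephadm's usual probe of the image's ceph user before it chowns data directories. A hypothetical reproduction of such a probe; the stat target and entrypoint are assumptions, only the image digest and the uid/gid values come from the log:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # Run stat inside a throwaway container to read the owner of /var/lib/ceph.
    proc = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(proc.stdout.strip())   # expected: "167 167"
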
Dec  7 04:40:41 np0005549474 systemd[1]: libpod-48550cc849412007e527ba37de7b0f376505299420f7f2331403e579855cf140.scope: Deactivated successfully.
Dec  7 04:40:41 np0005549474 podman[83139]: 2025-12-07 09:40:41.679993749 +0000 UTC m=+0.182957000 container died 48550cc849412007e527ba37de7b0f376505299420f7f2331403e579855cf140 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 04:40:41 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3c3f04339911264c3a5d068de6c78653183c874875332b8ac3269921e2fa6ccc-merged.mount: Deactivated successfully.
Dec  7 04:40:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:41 np0005549474 podman[83139]: 2025-12-07 09:40:41.717500359 +0000 UTC m=+0.220463610 container remove 48550cc849412007e527ba37de7b0f376505299420f7f2331403e579855cf140 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 04:40:41 np0005549474 systemd[1]: libpod-conmon-48550cc849412007e527ba37de7b0f376505299420f7f2331403e579855cf140.scope: Deactivated successfully.
Dec  7 04:40:41 np0005549474 podman[83185]: 2025-12-07 09:40:41.88626079 +0000 UTC m=+0.040755186 container create f621ab1cb0531095f002c93aeb2f4ea866f9942866e5455df673c4ca06b25abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 04:40:41 np0005549474 ceph-osd[83033]: bdev(0x55c4e3939800 /var/lib/ceph/osd/ceph-0/block) close
Dec  7 04:40:41 np0005549474 systemd[1]: Started libpod-conmon-f621ab1cb0531095f002c93aeb2f4ea866f9942866e5455df673c4ca06b25abf.scope.
Dec  7 04:40:41 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a022ed923c3f8e7a802e48caa111f60141773fa78c6662a19ccf3afb20526bdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a022ed923c3f8e7a802e48caa111f60141773fa78c6662a19ccf3afb20526bdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a022ed923c3f8e7a802e48caa111f60141773fa78c6662a19ccf3afb20526bdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a022ed923c3f8e7a802e48caa111f60141773fa78c6662a19ccf3afb20526bdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:41 np0005549474 podman[83185]: 2025-12-07 09:40:41.869874579 +0000 UTC m=+0.024369005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:41 np0005549474 podman[83185]: 2025-12-07 09:40:41.976655312 +0000 UTC m=+0.131149758 container init f621ab1cb0531095f002c93aeb2f4ea866f9942866e5455df673c4ca06b25abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_davinci, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:40:41 np0005549474 podman[83185]: 2025-12-07 09:40:41.985471941 +0000 UTC m=+0.139966347 container start f621ab1cb0531095f002c93aeb2f4ea866f9942866e5455df673c4ca06b25abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_davinci, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 04:40:41 np0005549474 podman[83185]: 2025-12-07 09:40:41.989258382 +0000 UTC m=+0.143752788 container attach f621ab1cb0531095f002c93aeb2f4ea866f9942866e5455df673c4ca06b25abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:40:42 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:42 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:42 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:42 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: load: jerasure load: lrc 
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) close
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) close
Dec  7 04:40:42 np0005549474 lvm[83284]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:40:42 np0005549474 lvm[83284]: VG ceph_vg0 finished
Dec  7 04:40:42 np0005549474 vibrant_davinci[83202]: {}
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
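
The two mClockScheduler figures above are self-consistent and match the stock rotational-device profile: 157286400 bytes/second is exactly 150 MiB/s, and dividing it by 315 IOPS (assumed here to be the osd_mclock_max_capacity_iops_hdd default) yields the logged per-IO cost. The arithmetic:

    bandwidth = 150 * 2**20     # 157286400 B/s, the logged capacity per shard
    iops = 315                  # assumed HDD IOPS capacity default
    print(bandwidth / iops)     # 499321.90..., the logged cost per IO
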
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) close
Dec  7 04:40:42 np0005549474 systemd[1]: libpod-f621ab1cb0531095f002c93aeb2f4ea866f9942866e5455df673c4ca06b25abf.scope: Deactivated successfully.
Dec  7 04:40:42 np0005549474 systemd[1]: libpod-f621ab1cb0531095f002c93aeb2f4ea866f9942866e5455df673c4ca06b25abf.scope: Consumed 1.147s CPU time.
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) close
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) close
Dec  7 04:40:42 np0005549474 podman[83296]: 2025-12-07 09:40:42.797718873 +0000 UTC m=+0.033316398 container died f621ab1cb0531095f002c93aeb2f4ea866f9942866e5455df673c4ca06b25abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47dec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47df000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47df000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47df000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47df000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluefs mount
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluefs mount shared_bdev_used = 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
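
The db_paths budget is derived from the device size logged earlier: 95% of the 21470642176-byte block device is 20397110067 bytes, assigned to both the db and db.slow levels since there is no separate DB device here. Checking:

    block = 21470642176          # logged size of /var/lib/ceph/osd/ceph-0/block
    print(int(block * 0.95))     # 20397110067, the value in _prepare_db_environment
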
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: RocksDB version: 7.9.2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Git sha 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: DB SUMMARY
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: DB Session ID:  QLUWBZB4YEL6Z6GKUVC4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: CURRENT file:  CURRENT
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: IDENTITY file:  IDENTITY
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                         Options.error_if_exists: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.create_if_missing: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                         Options.paranoid_checks: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                                     Options.env: 0x55c4e47afdc0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                                Options.info_log: 0x55c4e47b37a0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_file_opening_threads: 16
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                              Options.statistics: (nil)
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.use_fsync: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.max_log_file_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                         Options.allow_fallocate: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.use_direct_reads: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.create_missing_column_families: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                              Options.db_log_dir: 
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                                 Options.wal_dir: db.wal
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.advise_random_on_open: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.write_buffer_manager: 0x55c4e48a8a00
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                            Options.rate_limiter: (nil)
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.unordered_write: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.row_cache: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                              Options.wal_filter: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.allow_ingest_behind: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.two_write_queues: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.manual_wal_flush: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.wal_compression: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.atomic_flush: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.log_readahead_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.allow_data_in_errors: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.db_host_id: __hostname__
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.max_background_jobs: 4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.max_background_compactions: -1
Dec  7 04:40:42 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a022ed923c3f8e7a802e48caa111f60141773fa78c6662a19ccf3afb20526bdc-merged.mount: Deactivated successfully.
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.max_subcompactions: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.max_open_files: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.bytes_per_sync: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.max_background_flushes: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Compression algorithms supported:
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: 	kZSTD supported: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: 	kXpressCompression supported: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: 	kBZip2Compression supported: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: 	kLZ4Compression supported: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: 	kZlibCompression supported: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: 	kSnappyCompression supported: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39ce9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39ce9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39ce9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fbf3f118-e4f8-4ab7-992c-b90e1055f01e
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100442837928, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100442838099, "job": 1, "event": "recovery_finished"}
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: freelist init
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: freelist _read_cfg
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bluefs umount
Dec  7 04:40:42 np0005549474 ceph-osd[83033]: bdev(0x55c4e47df000 /var/lib/ceph/osd/ceph-0/block) close
Dec  7 04:40:42 np0005549474 podman[83296]: 2025-12-07 09:40:42.843318181 +0000 UTC m=+0.078915696 container remove f621ab1cb0531095f002c93aeb2f4ea866f9942866e5455df673c4ca06b25abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_davinci, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 04:40:42 np0005549474 systemd[1]: libpod-conmon-f621ab1cb0531095f002c93aeb2f4ea866f9942866e5455df673c4ca06b25abf.scope: Deactivated successfully.
Dec  7 04:40:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:40:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:40:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:43 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:43 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bdev(0x55c4e47df000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bdev(0x55c4e47df000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bdev(0x55c4e47df000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bdev(0x55c4e47df000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluefs mount
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluefs mount shared_bdev_used = 4718592
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: RocksDB version: 7.9.2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Git sha 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: DB SUMMARY
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: DB Session ID:  QLUWBZB4YEL6Z6GKUVC5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: CURRENT file:  CURRENT
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: IDENTITY file:  IDENTITY
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                         Options.error_if_exists: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.create_if_missing: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                         Options.paranoid_checks: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                                     Options.env: 0x55c4e494c310
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                                Options.info_log: 0x55c4e47b3920
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_file_opening_threads: 16
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                              Options.statistics: (nil)
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.use_fsync: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.max_log_file_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                         Options.allow_fallocate: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.use_direct_reads: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.create_missing_column_families: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                              Options.db_log_dir: 
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                                 Options.wal_dir: db.wal
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.advise_random_on_open: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.write_buffer_manager: 0x55c4e48a8a00
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                            Options.rate_limiter: (nil)
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.unordered_write: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.row_cache: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                              Options.wal_filter: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.allow_ingest_behind: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.two_write_queues: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.manual_wal_flush: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.wal_compression: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.atomic_flush: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.log_readahead_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.allow_data_in_errors: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.db_host_id: __hostname__
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.max_background_jobs: 4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.max_background_compactions: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.max_subcompactions: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.max_open_files: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.bytes_per_sync: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.max_background_flushes: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Compression algorithms supported:
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: 	kZSTD supported: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: 	kXpressCompression supported: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: 	kBZip2Compression supported: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: 	kLZ4Compression supported: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: 	kZlibCompression supported: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: 	kSnappyCompression supported: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
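The [m-1] dump above (and the identical m/p shard dumps that follow) maps one-to-one onto RocksDB's ColumnFamilyOptions. As a rough illustration, a minimal C++ sketch that sets the key values logged here follows; the helper name make_m_shard_options is hypothetical, not part of Ceph or RocksDB, and only a representative subset of the logged fields is shown.

#include <rocksdb/options.h>

// Sketch: rebuild the core [m-1] column-family options shown in the log.
// Values are copied from the dump above; the helper name is made up.
rocksdb::ColumnFamilyOptions make_m_shard_options() {
  rocksdb::ColumnFamilyOptions cf;
  cf.write_buffer_size = 16777216;                      // Options.write_buffer_size
  cf.max_write_buffer_number = 64;                      // Options.max_write_buffer_number
  cf.min_write_buffer_number_to_merge = 6;              // merge 6 memtables per flush
  cf.compression = rocksdb::kLZ4Compression;            // Options.compression: LZ4
  cf.num_levels = 7;                                    // Options.num_levels
  cf.level0_file_num_compaction_trigger = 8;            // L0 compaction trigger
  cf.level0_slowdown_writes_trigger = 20;               // write-slowdown threshold
  cf.level0_stop_writes_trigger = 36;                   // write-stall threshold
  cf.target_file_size_base = 67108864;                  // 64 MiB SST target
  cf.max_bytes_for_level_base = 1073741824;             // 1 GiB at L1
  cf.max_bytes_for_level_multiplier = 8.0;              // 8x growth per level
  cf.compaction_style = rocksdb::kCompactionStyleLevel;
  cf.compaction_pri = rocksdb::kMinOverlappingRatio;
  cf.soft_pending_compaction_bytes_limit = 68719476736ull;
  cf.hard_pending_compaction_bytes_limit = 274877906944ull;
  cf.force_consistency_checks = true;                   // force_consistency_checks: 1
  cf.ttl = 2592000;                                     // 30 days
  return cf;
}

int main() { auto cf = make_m_shard_options(); (void)cf; return 0; }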
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
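With level_compaction_dynamic_level_bytes at 0 and every max_bytes_for_level_multiplier_addtl factor at 1, the values above imply a static level sizing of max_bytes_for_level_base * 8^(n-1) for level n: 1 GiB at L1, 8 GiB at L2, 64 GiB at L3, and so on up through num_levels 7. A short sketch of that arithmetic, under those stated assumptions:

#include <cstdint>
#include <cstdio>

// Sketch: static level targets implied by the dump above
// (target(Ln) = max_bytes_for_level_base * multiplier^(n-1),
// valid because level_compaction_dynamic_level_bytes is 0 and
// all max_bytes_for_level_multiplier_addtl factors are 1).
int main() {
  const double mult = 8.0;              // max_bytes_for_level_multiplier
  uint64_t target = 1073741824ull;      // max_bytes_for_level_base (1 GiB)
  for (int level = 1; level <= 6; ++level) {  // num_levels: 7 => L1..L6
    std::printf("L%d target: %llu bytes\n", level,
                static_cast<unsigned long long>(target));
    target = static_cast<uint64_t>(target * mult);
  }
  return 0;
}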
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39cf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
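The table_factory lines in each dump correspond to rocksdb::BlockBasedTableOptions; note that the [O-0] family below gets its own 536870912-byte (512 MiB) block cache, while the m/p shards above share a 483183820-byte one. A hedged C++ sketch of those settings follows: BinnedLRUCache is Ceph's own cache implementation, so RocksDB's stock sharded LRU cache stands in here, the function name with_logged_table_factory is made up, and the 10 bits/key bloom value is an assumption (the log only says "bloomfilter").

#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Sketch: the table_factory settings logged above, expressed with stock
// RocksDB types. BinnedLRUCache is Ceph's own cache; NewLRUCache is a
// stand-in. The bloom bits/key value (10) is an assumption.
rocksdb::ColumnFamilyOptions with_logged_table_factory(rocksdb::ColumnFamilyOptions cf) {
  rocksdb::BlockBasedTableOptions t;
  t.block_size = 4096;                        // block_size: 4096
  t.metadata_block_size = 4096;               // metadata_block_size: 4096
  t.cache_index_and_filter_blocks = true;     // cache_index_and_filter_blocks: 1
  t.pin_top_level_index_and_filter = true;    // pin_top_level_index_and_filter: 1
  t.whole_key_filtering = true;               // whole_key_filtering: 1
  t.format_version = 5;                       // format_version: 5
  t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
  t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
  cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
  return cf;
}

int main() {
  auto cf = with_logged_table_factory(rocksdb::ColumnFamilyOptions());
  (void)cf;
  return 0;
}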
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3ac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c4e39ce9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
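Note: the single-line table_factory dumps above are hard to scan because syslog escapes embedded control characters: "#012" is octal 012, i.e. a newline, and "#011" a tab. Undoing the escape recovers RocksDB's original multi-line listing; a small sketch, with a shortened literal standing in for the real line:

    # "#012" in these syslog lines is an escaped newline (octal 012).
    line = ("table_factory options:   flush_block_policy_factory: "
            "FlushBlockBySizePolicyFactory#012  cache_index_and_filter_blocks: 1"
            "#012  block_size: 4096")
    print(line.replace("#012", "\n").replace("#011", "\t"))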
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4e39ce9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
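Note: with max_bytes_for_level_base = 1073741824, max_bytes_for_level_multiplier = 8, all addtl multipliers at 1, and level_compaction_dynamic_level_bytes off (all visible in the dump above), the nominal level capacities form a plain geometric series. A quick check of what those numbers imply:

    # Nominal per-level target sizes implied by the dumped options.
    base, mult = 1073741824, 8           # 1 GiB base, x8 per level
    for lvl in range(1, 7):              # num_levels = 7 -> L1..L6
        print(f"L{lvl}: {base * mult ** (lvl - 1) / 2**30:.0f} GiB")
    # L1: 1, L2: 8, L3: 64, L4: 512, L5: 4096, L6: 32768 GiB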
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:           Options.merge_operator: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e47b3ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c4e39ce9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.compression: LZ4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.num_levels: 7
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.bloom_locality: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                               Options.ttl: 2592000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                       Options.enable_blob_files: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                           Options.min_blob_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
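Note: the memtable settings above also bound how L0 fills: each memtable is write_buffer_size = 16 MiB, min_write_buffer_number_to_merge = 6 memtables are merged per flush, and compaction into L1 starts at level0_file_num_compaction_trigger = 8 files. Rough upper-bound arithmetic (actual flushes can be smaller):

    wb = 16 * 2**20                      # write_buffer_size
    per_flush = wb * 6                   # min_write_buffer_number_to_merge
    l0_at_trigger = per_flush * 8        # level0_file_num_compaction_trigger
    print(per_flush / 2**20, "MiB per flush")     # 96.0
    print(l0_at_trigger / 2**20, "MiB in L0")     # 768.0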
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
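Note: the recovered column-family inventory above (default, m-*, p-*, O-*, L, P) can be scraped straight from these lines. A log-parsing sketch, with the regex written for this message format and no RocksDB API involved:

    import re
    pat = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\)")
    line = "rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5"
    name, cf_id = pat.search(line).groups()
    print(name, cf_id)   # m-0 1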
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fbf3f118-e4f8-4ab7-992c-b90e1055f01e
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100443124454, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100443127730, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100443, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fbf3f118-e4f8-4ab7-992c-b90e1055f01e", "db_session_id": "QLUWBZB4YEL6Z6GKUVC5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100443130323, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100443, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fbf3f118-e4f8-4ab7-992c-b90e1055f01e", "db_session_id": "QLUWBZB4YEL6Z6GKUVC5", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100443133291, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100443, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fbf3f118-e4f8-4ab7-992c-b90e1055f01e", "db_session_id": "QLUWBZB4YEL6Z6GKUVC5", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100443134863, "job": 1, "event": "recovery_finished"}
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c4e49b0000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: DB pointer 0x55c4e495a000
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
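Note: the option string BlueStore echoes here is a flat comma-separated key=value list that mirrors the values in the per-column-family dumps above. A sketch that splits it for inspection, with the string copied from the line above:

    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    print(parsed["write_buffer_size"], parsed["compaction_readahead_size"])  # 16777216 2MB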
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 460.80 MB usag
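Note: one oddity in the stats dump above worth flagging: the block-cache lines report occupancy: 18446744073709551615, which is exactly 2**64 - 1, an all-ones unsigned 64-bit value (effectively -1) rather than a plausible occupancy count, while the capacity and usage fields look sane. Quick check:

    occ = 18446744073709551615
    print(occ == 2**64 - 1, hex(occ))   # True 0xffffffffffffffff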
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: _get_class not permitted to load lua
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: _get_class not permitted to load sdk
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
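Note: the crush-map feature value repeated in the three lines above is easier to reason about as bits; pure arithmetic, with no claim here about which named Ceph feature each bit corresponds to:

    feats = 288232575208783872
    print(hex(feats))                                # 0x400020002040000
    print([b for b in range(64) if feats >> b & 1])  # bits [18, 25, 41, 58]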
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: osd.0 0 load_pgs
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: osd.0 0 load_pgs opened 0 pgs
Dec  7 04:40:43 np0005549474 ceph-osd[83033]: osd.0 0 log_to_monitors true
Dec  7 04:40:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0[83029]: 2025-12-07T09:40:43.159+0000 7f1e213c6740 -1 osd.0 0 log_to_monitors true
Dec  7 04:40:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec  7 04:40:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  7 04:40:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:40:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:40:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Dec  7 04:40:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
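Note: CRUSH weights are, by Ceph convention, the device capacity in TiB, so the initial_weight 0.0195 in the create-or-move above corresponds to a roughly 20 GiB device:

    weight_tib = 0.0195
    print(weight_tib * 1024, "GiB")   # 19.968 GiB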
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:44 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:44 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  7 04:40:44 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  7 04:40:44 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  7 04:40:44 np0005549474 podman[83866]: 2025-12-07 09:40:44.23590205 +0000 UTC m=+0.074949810 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:44 np0005549474 podman[83866]: 2025-12-07 09:40:44.335858673 +0000 UTC m=+0.174906403 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:40:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
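This burst of config-key set calls is cephadm persisting its per-host inventory cache (the mgr/cephadm/host.<name> and mgr/cephadm/host.<name>.devices.N keys) into the mon key-value store. A cached entry can be read back directly; a sketch, assuming the stored blob is JSON as cephadm's caches normally are:

    import json, subprocess

    # Key name copied from the mon_command above; needs admin caps.
    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(out), indent=2))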
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Dec  7 04:40:45 np0005549474 ceph-osd[83033]: osd.0 0 done with init, starting boot process
Dec  7 04:40:45 np0005549474 ceph-osd[83033]: osd.0 0 start_boot
Dec  7 04:40:45 np0005549474 ceph-osd[83033]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  7 04:40:45 np0005549474 ceph-osd[83033]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  7 04:40:45 np0005549474 ceph-osd[83033]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  7 04:40:45 np0005549474 ceph-osd[83033]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
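These four lines are the mClock scheduler pinning the recovery and backfill knobs to its own defaults during OSD start-up; with mClock active, manual changes to them are normally ignored. The effective values can be read back per daemon, assuming the ceph CLI is available:

    import subprocess

    # 'ceph config show <who> <option>' reports the value the running daemon
    # is actually using, i.e. the mclock-pinned settings logged above.
    for opt in ["osd_max_backfills", "osd_recovery_max_active",
                "osd_recovery_max_active_hdd", "osd_recovery_max_active_ssd"]:
        v = subprocess.run(["ceph", "config", "show", "osd.0", opt],
                           capture_output=True, text=True,
                           check=True).stdout.strip()
        print(opt, "=", v)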
Dec  7 04:40:45 np0005549474 ceph-osd[83033]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:45 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:45 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:45 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:45 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:40:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:45 np0005549474 podman[84117]: 2025-12-07 09:40:45.876516187 +0000 UTC m=+0.055568251 container create f1ab1021130864f76ef6e283535f557394ec0ee0dac5adf2357f6b1b5c7d8db2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hypatia, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 04:40:45 np0005549474 podman[84117]: 2025-12-07 09:40:45.841788108 +0000 UTC m=+0.020840192 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:45 np0005549474 systemd[1]: Started libpod-conmon-f1ab1021130864f76ef6e283535f557394ec0ee0dac5adf2357f6b1b5c7d8db2.scope.
Dec  7 04:40:45 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:45 np0005549474 podman[84117]: 2025-12-07 09:40:45.994774237 +0000 UTC m=+0.173826321 container init f1ab1021130864f76ef6e283535f557394ec0ee0dac5adf2357f6b1b5c7d8db2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hypatia, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 04:40:46 np0005549474 podman[84117]: 2025-12-07 09:40:46.001171424 +0000 UTC m=+0.180223498 container start f1ab1021130864f76ef6e283535f557394ec0ee0dac5adf2357f6b1b5c7d8db2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hypatia, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:40:46 np0005549474 dreamy_hypatia[84134]: 167 167
Dec  7 04:40:46 np0005549474 systemd[1]: libpod-f1ab1021130864f76ef6e283535f557394ec0ee0dac5adf2357f6b1b5c7d8db2.scope: Deactivated successfully.
Dec  7 04:40:46 np0005549474 podman[84117]: 2025-12-07 09:40:46.036859502 +0000 UTC m=+0.215911566 container attach f1ab1021130864f76ef6e283535f557394ec0ee0dac5adf2357f6b1b5c7d8db2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hypatia, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:40:46 np0005549474 podman[84117]: 2025-12-07 09:40:46.037420978 +0000 UTC m=+0.216473042 container died f1ab1021130864f76ef6e283535f557394ec0ee0dac5adf2357f6b1b5c7d8db2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hypatia, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 04:40:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 04:40:46 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:46 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Dec  7 04:40:46 np0005549474 systemd[1]: var-lib-containers-storage-overlay-9c5340e4de09b1006c5e0e9073bcb137cdb24a3a382ff21bd473541a4d9a163c-merged.mount: Deactivated successfully.
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:46 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:46 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:46 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:46 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:46 np0005549474 podman[84117]: 2025-12-07 09:40:46.320910646 +0000 UTC m=+0.499962710 container remove f1ab1021130864f76ef6e283535f557394ec0ee0dac5adf2357f6b1b5c7d8db2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 04:40:46 np0005549474 systemd[1]: libpod-conmon-f1ab1021130864f76ef6e283535f557394ec0ee0dac5adf2357f6b1b5c7d8db2.scope: Deactivated successfully.
Dec  7 04:40:46 np0005549474 podman[84160]: 2025-12-07 09:40:46.500738962 +0000 UTC m=+0.044592339 container create 9b9313f66b7a91bf3a0114c3770926e9176a3f87c9877aac9f453fd218a29173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:46 np0005549474 podman[84160]: 2025-12-07 09:40:46.482043854 +0000 UTC m=+0.025897261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:40:46 np0005549474 systemd[1]: Started libpod-conmon-9b9313f66b7a91bf3a0114c3770926e9176a3f87c9877aac9f453fd218a29173.scope.
Dec  7 04:40:46 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:40:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c83cb6b3ecd4467c5be73d0c58514ae6c203dad77ab668904d63d2ed09100fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c83cb6b3ecd4467c5be73d0c58514ae6c203dad77ab668904d63d2ed09100fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c83cb6b3ecd4467c5be73d0c58514ae6c203dad77ab668904d63d2ed09100fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c83cb6b3ecd4467c5be73d0c58514ae6c203dad77ab668904d63d2ed09100fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:46 np0005549474 podman[84160]: 2025-12-07 09:40:46.648361383 +0000 UTC m=+0.192214810 container init 9b9313f66b7a91bf3a0114c3770926e9176a3f87c9877aac9f453fd218a29173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:40:46 np0005549474 podman[84160]: 2025-12-07 09:40:46.655947386 +0000 UTC m=+0.199800783 container start 9b9313f66b7a91bf3a0114c3770926e9176a3f87c9877aac9f453fd218a29173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:46 np0005549474 podman[84160]: 2025-12-07 09:40:46.713915167 +0000 UTC m=+0.257768564 container attach 9b9313f66b7a91bf3a0114c3770926e9176a3f87c9877aac9f453fd218a29173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:40:46 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Dec  7 04:40:46 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 04:40:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:47 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:47 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:47 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:47 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: Adjusting osd_memory_target on compute-1 to  5247M
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]: [
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:    {
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:        "available": false,
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:        "being_replaced": false,
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:        "ceph_device_lvm": false,
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:        "lsm_data": {},
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:        "lvs": [],
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:        "path": "/dev/sr0",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:        "rejected_reasons": [
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "Insufficient space (<5GB)",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "Has a FileSystem"
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:        ],
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:        "sys_api": {
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "actuators": null,
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "device_nodes": [
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:                "sr0"
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            ],
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "devname": "sr0",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "human_readable_size": "482.00 KB",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "id_bus": "ata",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "model": "QEMU DVD-ROM",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "nr_requests": "2",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "parent": "/dev/sr0",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "partitions": {},
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "path": "/dev/sr0",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "removable": "1",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "rev": "2.5+",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "ro": "0",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "rotational": "1",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "sas_address": "",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "sas_device_handle": "",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "scheduler_mode": "mq-deadline",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "sectors": 0,
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "sectorsize": "2048",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "size": 493568.0,
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "support_discard": "2048",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "type": "disk",
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:            "vendor": "QEMU"
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:        }
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]:    }
Dec  7 04:40:47 np0005549474 busy_rhodes[84178]: ]
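The JSON above is a ceph-volume inventory report produced by the short-lived busy_rhodes container: the only device seen on this pass is /dev/sr0, rejected as a 482 KB removable medium that is both too small and already carries a filesystem. A minimal sketch for filtering such a report, assuming this list shape and a hypothetical inventory.json capture:

    import json

    # 'inventory.json' is hypothetical, e.g. saved output of
    # 'ceph-volume inventory --format json-pretty'.
    with open("inventory.json") as f:
        devices = json.load(f)

    for dev in devices:
        if dev.get("available"):
            print("usable:", dev["path"])
        else:
            print("rejected:", dev["path"], "-",
                  "; ".join(dev.get("rejected_reasons", [])))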
Dec  7 04:40:47 np0005549474 systemd[1]: libpod-9b9313f66b7a91bf3a0114c3770926e9176a3f87c9877aac9f453fd218a29173.scope: Deactivated successfully.
Dec  7 04:40:47 np0005549474 podman[84160]: 2025-12-07 09:40:47.365729021 +0000 UTC m=+0.909582398 container died 9b9313f66b7a91bf3a0114c3770926e9176a3f87c9877aac9f453fd218a29173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:40:47 np0005549474 systemd[1]: var-lib-containers-storage-overlay-7c83cb6b3ecd4467c5be73d0c58514ae6c203dad77ab668904d63d2ed09100fa-merged.mount: Deactivated successfully.
Dec  7 04:40:47 np0005549474 podman[84160]: 2025-12-07 09:40:47.606679871 +0000 UTC m=+1.150533248 container remove 9b9313f66b7a91bf3a0114c3770926e9176a3f87c9877aac9f453fd218a29173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 04:40:47 np0005549474 systemd[1]: libpod-conmon-9b9313f66b7a91bf3a0114c3770926e9176a3f87c9877aac9f453fd218a29173.scope: Deactivated successfully.
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:40:47 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec  7 04:40:47 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec  7 04:40:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 04:40:47 np0005549474 ceph-mgr[74811]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:40:47 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
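The rejection is pure arithmetic: cephadm's memory autotuner computed 134217728 bytes for this host, but osd_memory_target enforces a floor of 939524096 bytes, so the option keeps its previous value. Converting the two numbers from the warning:

    # Both values come straight from the warning above.
    requested = 134_217_728     # what cephadm tried to set
    minimum = 939_524_096       # the enforced lower bound
    print(requested / 2**20, "MiB requested vs", minimum / 2**20, "MiB floor")
    # -> 128.0 MiB requested vs 896.0 MiB floor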
Dec  7 04:40:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:48 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:48 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:48 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:48 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:48 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:48 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:48 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:48 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:40:48 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:40:48 np0005549474 ceph-mon[74516]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec  7 04:40:48 np0005549474 ceph-mon[74516]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:40:49 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:49 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:49 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:49 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:50 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:50 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:50 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:50 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:51 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:51 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:51 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:51 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:52 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:52 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:52 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:52 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:53 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:53 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:53 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:53 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:54 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:54 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:54 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:54 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:55 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:55 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:55 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:55 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 04:40:56 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/194844255; not ready for session (expect reconnect)
Dec  7 04:40:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:56 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 04:40:56 np0005549474 ceph-osd[83033]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 2.804 iops: 717.937 elapsed_sec: 4.179
Dec  7 04:40:56 np0005549474 ceph-osd[83033]: log_channel(cluster) log [WRN] : OSD bench result of 717.937482 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
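The bench figures are internally consistent with the earlier "bench count 12288000 bsize 4 KiB" line: 12288000 bytes at 4 KiB per IO is 3000 IOs, 3000 IOs at the reported 717.937 IOPS takes the reported 4.179 s, and 717.937 IOPS at 4 KiB is the reported 2.804 MiB/s. Because the measurement falls outside mClock's 50-500 IOPS sanity window, the default 315 IOPS capacity is kept. Below, a cross-check plus the override the warning recommends, with the caveat that the override value should come from a dedicated benchmark such as Fio, not from this boot-time bench:

    import subprocess

    # Cross-check (count is total bytes, bsize the per-IO size).
    ios = 12_288_000 // (4 * 1024)                     # 3000 IOs
    iops = 717.937                                     # reported
    print(round(ios / iops, 3), "s,",                  # ~4.179 s
          round(iops * 4096 / 2**20, 3), "MiB/s")      # ~2.804 MiB/s

    # The override suggested by the warning; the value below is a
    # placeholder - substitute a Fio-measured figure.
    measured_iops = "315"
    subprocess.run(["ceph", "config", "set", "osd.0",
                    "osd_mclock_max_capacity_iops_hdd", measured_iops],
                   check=True)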
Dec  7 04:40:56 np0005549474 ceph-osd[83033]: osd.0 0 waiting for initial osdmap
Dec  7 04:40:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0[83029]: 2025-12-07T09:40:56.099+0000 7f1e1d349640 -1 osd.0 0 waiting for initial osdmap
Dec  7 04:40:56 np0005549474 ceph-osd[83033]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec  7 04:40:56 np0005549474 ceph-osd[83033]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec  7 04:40:56 np0005549474 ceph-osd[83033]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec  7 04:40:56 np0005549474 ceph-osd[83033]: osd.0 8 check_osdmap_features require_osd_release unknown -> squid
Dec  7 04:40:56 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:56 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:56 np0005549474 ceph-osd[83033]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  7 04:40:56 np0005549474 ceph-osd[83033]: osd.0 8 set_numa_affinity not setting numa affinity
Dec  7 04:40:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-0[83029]: 2025-12-07T09:40:56.226+0000 7f1e18971640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  7 04:40:56 np0005549474 ceph-osd[83033]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec  7 04:40:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255] boot
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
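osdmap e9 is the first epoch with osd.0 up. The same total/up/in summary is available programmatically, assuming the ceph CLI:

    import json, subprocess

    # Programmatic equivalent of the "2 total, 1 up, 2 in" osdmap summary.
    s = json.loads(subprocess.run(
        ["ceph", "osd", "stat", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(s["num_osds"], "total,", s["num_up_osds"], "up,",
          s["num_in_osds"], "in")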
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:57 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:57 np0005549474 ceph-osd[83033]: osd.0 9 state: booting -> active
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: OSD bench result of 717.937482 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  7 04:40:57 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:57 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:40:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:40:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:40:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:40:57 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] creating mgr pool
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec  7 04:40:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
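The audit entry preserves the exact mon_command payload the mgr's devicehealth module used to create the .mgr pool (one PG, allowed to autoscale between 1 and 32). The same payload shape can be replayed through librados; a sketch, assuming python3-rados, a readable /etc/ceph/ceph.conf, and admin caps (re-creating an existing pool is effectively a no-op):

    import json
    import rados  # python3-rados

    # Payload copied from the audit record above.
    cmd = {"prefix": "osd pool create", "format": "json", "pool": ".mgr",
           "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32,
           "yes_i_really_mean_it": True}
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)
    cluster.shutdown()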
Dec  7 04:40:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:40:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:40:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
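The .mgr pool created here is the backing store the mgr devicehealth module uses for SMART metrics: one PG, allowed to autoscale between pg_num_min 1 and pg_num_max 32, with yes_i_really_mean_it required because the name starts with a dot. A roughly equivalent hand-run command, with flag spellings assumed from the JSON payload above:

    $ ceph osd pool create .mgr 1 --pg-num-min 1 --pg-num-max 32 --yes-i-really-mean-it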
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255] boot
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Dec  7 04:40:58 np0005549474 ceph-osd[83033]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  7 04:40:58 np0005549474 ceph-osd[83033]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec  7 04:40:58 np0005549474 ceph-osd[83033]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:58 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  7 04:40:58 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:58 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec  7 04:40:59 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  7 04:40:59 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  7 04:40:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  7 04:40:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Dec  7 04:40:59 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Dec  7 04:40:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:59 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:40:59 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:40:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:40:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:40:59 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 04:41:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  7 04:41:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec  7 04:41:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Dec  7 04:41:00 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336] boot
Dec  7 04:41:00 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec  7 04:41:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:41:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:41:00 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1470232336; not ready for session (expect reconnect)
Dec  7 04:41:00 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  7 04:41:00 np0005549474 ceph-mon[74516]: OSD bench result of 2499.858795 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  7 04:41:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec  7 04:41:01 np0005549474 ceph-mon[74516]: osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336] boot
Dec  7 04:41:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Dec  7 04:41:01 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Dec  7 04:41:01 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] creating main.db for devicehealth
Dec  7 04:41:01 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Check health
Dec  7 04:41:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  7 04:41:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
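The "smart" command arriving from='admin socket' is devicehealth scraping SMART data through the daemon's local Unix socket rather than over the network. It can be run manually on the node, assuming the default socket naming:

    $ ceph daemon mon.compute-0 smart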
Dec  7 04:41:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 04:41:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 04:41:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
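_set_new_cache_sizes is the monitor's cache autotuner redistributing its memory budget (about 0.95 GiB here) between full osdmaps (full_alloc), incremental osdmaps (inc_alloc) and the RocksDB block cache (kv_alloc). The budget is presumed to derive from mon_memory_target, which can be raised if the mon is memory-constrained (illustrative value of 2 GiB):

    $ ceph config set mon mon_memory_target 2147483648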
Dec  7 04:41:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  7 04:41:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec  7 04:41:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Dec  7 04:41:02 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Dec  7 04:41:02 np0005549474 ceph-mon[74516]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  7 04:41:02 np0005549474 ceph-mon[74516]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  7 04:41:02 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.dotugk(active, since 94s)
Dec  7 04:41:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:41:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:41:05 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:41:05 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:41:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  7 04:41:06 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:41:06 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:41:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:41:07 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:07 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:07 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:07 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:07 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:41:07 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:41:07 np0005549474 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:41:07 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:41:07 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:41:07 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:41:07 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:41:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  7 04:41:08 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 0416e82e-2fdd-482f-9f9e-09e279883553 (Updating mon deployment (+2 -> 3))
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:41:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:41:08 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Dec  7 04:41:08 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
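cephadm is converging the mon service spec from one daemon to three (the "+2 -> 3" progress event above): for each target host it pushes the minimal conf and the mon. keyring fetched via "auth get", then deploys the mon container. The rollout can be followed with standard orchestrator queries (not taken from this log):

    $ ceph orch ps --daemon-type mon
    $ ceph -W cephadm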
Dec  7 04:41:09 np0005549474 python3[85398]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:41:09 np0005549474 podman[85400]: 2025-12-07 09:41:09.17458302 +0000 UTC m=+0.044858837 container create 8f852319ca4f0f07594b07ecace72e4a5c8f7c8f281190b5f1550d5b3ebdeb17 (image=quay.io/ceph/ceph:v19, name=gracious_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 04:41:09 np0005549474 systemd[1]: Started libpod-conmon-8f852319ca4f0f07594b07ecace72e4a5c8f7c8f281190b5f1550d5b3ebdeb17.scope.
Dec  7 04:41:09 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b07350f48b419bf0435fb382777e9e5b741d4114decf79539b8b5019dc8bb76/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b07350f48b419bf0435fb382777e9e5b741d4114decf79539b8b5019dc8bb76/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b07350f48b419bf0435fb382777e9e5b741d4114decf79539b8b5019dc8bb76/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:09 np0005549474 podman[85400]: 2025-12-07 09:41:09.152656786 +0000 UTC m=+0.022932653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:09 np0005549474 podman[85400]: 2025-12-07 09:41:09.493352484 +0000 UTC m=+0.363628351 container init 8f852319ca4f0f07594b07ecace72e4a5c8f7c8f281190b5f1550d5b3ebdeb17 (image=quay.io/ceph/ceph:v19, name=gracious_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 04:41:09 np0005549474 podman[85400]: 2025-12-07 09:41:09.499952557 +0000 UTC m=+0.370228374 container start 8f852319ca4f0f07594b07ecace72e4a5c8f7c8f281190b5f1550d5b3ebdeb17 (image=quay.io/ceph/ceph:v19, name=gracious_booth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec  7 04:41:09 np0005549474 podman[85400]: 2025-12-07 09:41:09.509108966 +0000 UTC m=+0.379384833 container attach 8f852319ca4f0f07594b07ecace72e4a5c8f7c8f281190b5f1550d5b3ebdeb17 (image=quay.io/ceph/ceph:v19, name=gracious_booth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:41:09 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:09 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:09 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:09 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 04:41:09 np0005549474 ceph-mon[74516]: Deploying daemon mon.compute-2 on compute-2
Dec  7 04:41:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  7 04:41:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9648655' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  7 04:41:09 np0005549474 gracious_booth[85416]: 
Dec  7 04:41:09 np0005549474 gracious_booth[85416]: {"fsid":"75f4c9fd-539a-5e17-b55a-0a12a4e2736c","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":119,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":14,"num_osds":2,"num_up_osds":2,"osd_up_since":1765100460,"num_in_osds":2,"osd_in_since":1765100431,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":894681088,"bytes_avail":42046603264,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-12-07T09:39:07:860179+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-07T09:40:30.054434+0000","services":{}},"progress_events":{}}
Dec  7 04:41:09 np0005549474 systemd[1]: libpod-8f852319ca4f0f07594b07ecace72e4a5c8f7c8f281190b5f1550d5b3ebdeb17.scope: Deactivated successfully.
Dec  7 04:41:09 np0005549474 podman[85400]: 2025-12-07 09:41:09.954270747 +0000 UTC m=+0.824546584 container died 8f852319ca4f0f07594b07ecace72e4a5c8f7c8f281190b5f1550d5b3ebdeb17 (image=quay.io/ceph/ceph:v19, name=gracious_booth, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 04:41:09 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec  7 04:41:09 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  7 04:41:10 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4b07350f48b419bf0435fb382777e9e5b741d4114decf79539b8b5019dc8bb76-merged.mount: Deactivated successfully.
Dec  7 04:41:10 np0005549474 podman[85400]: 2025-12-07 09:41:10.237456286 +0000 UTC m=+1.107732103 container remove 8f852319ca4f0f07594b07ecace72e4a5c8f7c8f281190b5f1550d5b3ebdeb17 (image=quay.io/ceph/ceph:v19, name=gracious_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:41:10 np0005549474 systemd[1]: libpod-conmon-8f852319ca4f0f07594b07ecace72e4a5c8f7c8f281190b5f1550d5b3ebdeb17.scope: Deactivated successfully.
Dec  7 04:41:10 np0005549474 python3[85479]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:41:10 np0005549474 podman[85480]: 2025-12-07 09:41:10.775274186 +0000 UTC m=+0.042056875 container create 80567308745db90e0e063fc461dd5aa345a128c44e3a6f713caaf6953eda7de3 (image=quay.io/ceph/ceph:v19, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 04:41:10 np0005549474 systemd[1]: Started libpod-conmon-80567308745db90e0e063fc461dd5aa345a128c44e3a6f713caaf6953eda7de3.scope.
Dec  7 04:41:10 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa2f6a92699b7998e84220d6dfbe49b1bc38a3228e661799aebefbca147cdaf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa2f6a92699b7998e84220d6dfbe49b1bc38a3228e661799aebefbca147cdaf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:10 np0005549474 podman[85480]: 2025-12-07 09:41:10.843664812 +0000 UTC m=+0.110447521 container init 80567308745db90e0e063fc461dd5aa345a128c44e3a6f713caaf6953eda7de3 (image=quay.io/ceph/ceph:v19, name=cranky_mccarthy, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 04:41:10 np0005549474 podman[85480]: 2025-12-07 09:41:10.849875994 +0000 UTC m=+0.116658683 container start 80567308745db90e0e063fc461dd5aa345a128c44e3a6f713caaf6953eda7de3 (image=quay.io/ceph/ceph:v19, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:41:10 np0005549474 podman[85480]: 2025-12-07 09:41:10.753733803 +0000 UTC m=+0.020516512 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:10 np0005549474 podman[85480]: 2025-12-07 09:41:10.85381309 +0000 UTC m=+0.120595779 container attach 80567308745db90e0e063fc461dd5aa345a128c44e3a6f713caaf6953eda7de3 (image=quay.io/ceph/ceph:v19, name=cranky_mccarthy, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:41:10 np0005549474 ceph-mon[74516]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec  7 04:41:10 np0005549474 ceph-mon[74516]: Cluster is now healthy
Dec  7 04:41:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  7 04:41:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 04:41:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2220788821' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:41:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec  7 04:41:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2220788821' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 04:41:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Dec  7 04:41:11 np0005549474 cranky_mccarthy[85495]: pool 'vms' created
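Note how the mon parsed the invocation: because pg_num was omitted, the bare positional "replicated_rule" was bound to the erasure_code_profile slot in the mon_command above, although the pool was still created as replicated with autoscaling on. A less ambiguous two-step form, sketched with the standard pool-set command:

    $ ceph osd pool create vms --autoscale-mode on
    $ ceph osd pool set vms crush_rule replicated_rule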
Dec  7 04:41:11 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2220788821' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:11 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Dec  7 04:41:11 np0005549474 systemd[1]: libpod-80567308745db90e0e063fc461dd5aa345a128c44e3a6f713caaf6953eda7de3.scope: Deactivated successfully.
Dec  7 04:41:11 np0005549474 podman[85480]: 2025-12-07 09:41:11.891801306 +0000 UTC m=+1.158583995 container died 80567308745db90e0e063fc461dd5aa345a128c44e3a6f713caaf6953eda7de3 (image=quay.io/ceph/ceph:v19, name=cranky_mccarthy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:41:11 np0005549474 systemd[1]: var-lib-containers-storage-overlay-9fa2f6a92699b7998e84220d6dfbe49b1bc38a3228e661799aebefbca147cdaf-merged.mount: Deactivated successfully.
Dec  7 04:41:11 np0005549474 podman[85480]: 2025-12-07 09:41:11.926522365 +0000 UTC m=+1.193305054 container remove 80567308745db90e0e063fc461dd5aa345a128c44e3a6f713caaf6953eda7de3 (image=quay.io/ceph/ceph:v19, name=cranky_mccarthy, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 04:41:11 np0005549474 systemd[1]: libpod-conmon-80567308745db90e0e063fc461dd5aa345a128c44e3a6f713caaf6953eda7de3.scope: Deactivated successfully.
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:41:12 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Dec  7 04:41:12 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Dec  7 04:41:12 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/695810029; not ready for session (expect reconnect)
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 04:41:12 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 04:41:12 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 04:41:12 np0005549474 python3[85559]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:41:12 np0005549474 podman[85560]: 2025-12-07 09:41:12.257424093 +0000 UTC m=+0.040016735 container create b093e16315127ff6c4981cfbe61b5e1eb26cb3ea2fd0a17a80125f307560dec7 (image=quay.io/ceph/ceph:v19, name=sleepy_swartz, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:41:12 np0005549474 systemd[1]: Started libpod-conmon-b093e16315127ff6c4981cfbe61b5e1eb26cb3ea2fd0a17a80125f307560dec7.scope.
Dec  7 04:41:12 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ed61fb67e100962e0f4bdcef233db85a46954ca9dc65349f077b0a2338f0f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ed61fb67e100962e0f4bdcef233db85a46954ca9dc65349f077b0a2338f0f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:12 np0005549474 podman[85560]: 2025-12-07 09:41:12.332094354 +0000 UTC m=+0.114687016 container init b093e16315127ff6c4981cfbe61b5e1eb26cb3ea2fd0a17a80125f307560dec7 (image=quay.io/ceph/ceph:v19, name=sleepy_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:41:12 np0005549474 podman[85560]: 2025-12-07 09:41:12.240583799 +0000 UTC m=+0.023176461 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:12 np0005549474 podman[85560]: 2025-12-07 09:41:12.337713268 +0000 UTC m=+0.120305910 container start b093e16315127ff6c4981cfbe61b5e1eb26cb3ea2fd0a17a80125f307560dec7 (image=quay.io/ceph/ceph:v19, name=sleepy_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:41:12 np0005549474 podman[85560]: 2025-12-07 09:41:12.341624114 +0000 UTC m=+0.124216756 container attach b093e16315127ff6c4981cfbe61b5e1eb26cb3ea2fd0a17a80125f307560dec7 (image=quay.io/ceph/ceph:v19, name=sleepy_swartz, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 04:41:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
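"handle_auth_request failed to assign global_id" is expected while the mon is mid-election: a monitor outside quorum cannot hand out global_ids, so new connections are bounced until the election below completes. A mon stuck in this state can still be inspected over its admin socket:

    $ ceph daemon mon.compute-0 mon_status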
Dec  7 04:41:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v61: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 04:41:13 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/695810029; not ready for session (expect reconnect)
Dec  7 04:41:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 04:41:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 04:41:13 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 04:41:13 np0005549474 ceph-mgr[74811]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec  7 04:41:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:41:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  7 04:41:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  7 04:41:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 04:41:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  7 04:41:14 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:14 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 04:41:14 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/695810029; not ready for session (expect reconnect)
Dec  7 04:41:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 04:41:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 04:41:14 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 04:41:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v62: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:15 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:15 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 04:41:15 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/695810029; not ready for session (expect reconnect)
Dec  7 04:41:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 04:41:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 04:41:15 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 04:41:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 04:41:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 04:41:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  7 04:41:16 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:16 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 04:41:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 04:41:16 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/695810029; not ready for session (expect reconnect)
Dec  7 04:41:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 04:41:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 04:41:16 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 04:41:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 04:41:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v63: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:17 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:17 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Dec  7 04:41:17 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/695810029; not ready for session (expect reconnect)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 04:41:17 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : monmap epoch 2
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : last_changed 2025-12-07T09:41:12.124181+0000
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : created 2025-12-07T09:39:05.386379+0000
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap 
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.dotugk(active, since 109s)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : overall HEALTH_OK
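The block just logged (monmap epoch, fsid, min_mon_release, the ranked v2/v1 addresses, plus the fsmap/osdmap/mgrmap summaries and overall health) is the status dump the leader emits after winning an election. The same information can be read back interactively; a sketch using standard ceph commands:

    # monmap summary: epoch, fsid, election strategy, ranked mon addresses
    ceph mon dump
    # overall cluster state, including the HEALTH_OK / HEALTH_WARN line
    ceph -s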
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:17 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 0416e82e-2fdd-482f-9f9e-09e279883553 (Updating mon deployment (+2 -> 3))
Dec  7 04:41:17 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 0416e82e-2fdd-482f-9f9e-09e279883553 (Updating mon deployment (+2 -> 3)) in 9 seconds
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:17 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 9e02d5ca-5497-48c6-9b62-f51b67f1fe5a (Updating mgr deployment (+2 -> 3))
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.ntknug", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ntknug", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ntknug", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
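The JSON mon_command dispatched and finished above is the wire form of the `ceph auth get-or-create` CLI, issued by cephadm to mint a keyring for the new mgr daemon before deploying it. The equivalent hand-run invocation, with the entity and caps exactly as logged:

    # CLI equivalent of the auth get-or-create mon_command above
    ceph auth get-or-create mgr.compute-2.ntknug \
        mon 'profile mgr' osd 'allow *' mds 'allow *'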
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:41:17 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.ntknug on compute-2
Dec  7 04:41:17 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.ntknug on compute-2
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0 calling monitor election
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-2 calling monitor election
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: overall HEALTH_OK
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ntknug", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 04:41:17 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ntknug", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Dec  7 04:41:18 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:18 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 04:41:18 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: paxos.0).electionLogic(10) init, last seen epoch 10
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 04:41:18 np0005549474 ceph-mgr[74811]: mgr.server handle_report got status from non-daemon mon.compute-2
Dec  7 04:41:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:41:18.126+0000 7f83d6f76640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Dec  7 04:41:18 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 3 completed events
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v65: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:19 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:19 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 04:41:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:41:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:20 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:20 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 04:41:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v66: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:21 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:21 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 04:41:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:22 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:22 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:22 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 04:41:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v67: 2 pgs: 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:23 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:23 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 04:41:23 np0005549474 ceph-mon[74516]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Dec  7 04:41:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 04:41:23 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  7 04:41:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_auth_request failed to assign global_id
Dec  7 04:41:24 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:24 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : monmap epoch 3
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : last_changed 2025-12-07T09:41:18.042048+0000
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : created 2025-12-07T09:39:05.386379+0000
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap 
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.dotugk(active, since 116s)
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
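Following the remediation hint the mon prints above, the POOL_APP_NOT_ENABLED warning clears once the pool is tagged with an application. For an OpenStack 'vms' pool the conventional tag would be 'rbd' (an assumption here; the log only lists the allowed <app-name> values):

    # tag the pool named in the warning; 'rbd' is assumed per the hint's options
    ceph osd pool application enable vms rbd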
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/490793873' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec  7 04:41:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v68: 2 pgs: 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:25 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:25 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 04:41:26 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:26 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 04:41:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:26 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 4fe96d95-d5cc-4f18-85f0-a3ffc69dc21d (Global Recovery Event) in 13 seconds
Dec  7 04:41:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v69: 2 pgs: 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/356058792; not ready for session (expect reconnect)
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/490793873' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Dec  7 04:41:27 np0005549474 sleepy_swartz[85575]: pool 'volumes' created
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: Deploying daemon mgr.compute-2.ntknug on compute-2
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: mon.compute-0 calling monitor election
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: mon.compute-2 calling monitor election
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled
Dec  7 04:41:27 np0005549474 ceph-mon[74516]:    application not enabled on pool 'vms'
Dec  7 04:41:27 np0005549474 ceph-mon[74516]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/490793873' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Dec  7 04:41:27 np0005549474 systemd[1]: libpod-b093e16315127ff6c4981cfbe61b5e1eb26cb3ea2fd0a17a80125f307560dec7.scope: Deactivated successfully.
Dec  7 04:41:27 np0005549474 podman[85560]: 2025-12-07 09:41:27.345644263 +0000 UTC m=+15.128236905 container died b093e16315127ff6c4981cfbe61b5e1eb26cb3ea2fd0a17a80125f307560dec7 (image=quay.io/ceph/ceph:v19, name=sleepy_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:41:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:41:27
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [balancer INFO root] Some PGs (0.333333) are unknown; try again later
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
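The pg_autoscaler numbers above are internally consistent: the '.mgr' pg target equals its space ratio times 200, which matches the default mon_target_pg_per_osd of 100 across the 2 OSDs in the map (an inference from the logged values, not stated explicitly), and the result is then quantized to a power of two, with the empty 'vms' and 'volumes' pools landing on the default of 32. A sketch reproducing the '.mgr' arithmetic from the logged ratio:

    # ratio-of-space * bias * (assumed 100 PGs/OSD * 2 OSDs) = logged pg target
    awk 'BEGIN { printf "%.16f\n", 1.0778624975581169e-05 * 1.0 * 200 }'
    # prints 0.0021557249951162, matching the value logged above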
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:41:27 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:41:28 np0005549474 ceph-mgr[74811]: mgr.server handle_report got status from non-daemon mon.compute-1
Dec  7 04:41:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:41:28.047+0000 7f83d6f76640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:41:28 np0005549474 systemd[1]: var-lib-containers-storage-overlay-31ed61fb67e100962e0f4bdcef233db85a46954ca9dc65349f077b0a2338f0f2-merged.mount: Deactivated successfully.
Dec  7 04:41:28 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 04:41:28 np0005549474 podman[85560]: 2025-12-07 09:41:28.106633871 +0000 UTC m=+15.889226533 container remove b093e16315127ff6c4981cfbe61b5e1eb26cb3ea2fd0a17a80125f307560dec7 (image=quay.io/ceph/ceph:v19, name=sleepy_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.buauyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.buauyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 04:41:28 np0005549474 systemd[1]: libpod-conmon-b093e16315127ff6c4981cfbe61b5e1eb26cb3ea2fd0a17a80125f307560dec7.scope: Deactivated successfully.
Dec  7 04:41:28 np0005549474 python3[85641]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
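The ansible `command` module logs its entire invocation on one line. Reformatted for readability, the same pool-create call it ran (arguments exactly as logged; the container entrypoint is the ceph CLI, so everything after the image is a ceph command):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create backups replicated_rule --autoscale-mode on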
Dec  7 04:41:28 np0005549474 podman[85642]: 2025-12-07 09:41:28.455636811 +0000 UTC m=+0.029733704 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.buauyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  7 04:41:28 np0005549474 podman[85642]: 2025-12-07 09:41:28.713757374 +0000 UTC m=+0.287854267 container create 877ca3c192db9fe32378c3b85dc4dc7a58516187d380ee61653ca41df4c6d6d3 (image=quay.io/ceph/ceph:v19, name=pedantic_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:41:28 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.buauyv on compute-1
Dec  7 04:41:28 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.buauyv on compute-1
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: mon.compute-1 calling monitor election
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/490793873' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.buauyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Dec  7 04:41:28 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 69fb25db-c09d-46c0-b57e-eaff20d4c8e2 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:41:28 np0005549474 systemd[1]: Started libpod-conmon-877ca3c192db9fe32378c3b85dc4dc7a58516187d380ee61653ca41df4c6d6d3.scope.
Dec  7 04:41:28 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:28 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:28 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d88463bb5036ebfb809f46d4a8ad5821058d6c41c32dd5914420a15e8ddef22/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:28 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d88463bb5036ebfb809f46d4a8ad5821058d6c41c32dd5914420a15e8ddef22/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:28 np0005549474 podman[85642]: 2025-12-07 09:41:28.789104845 +0000 UTC m=+0.363201708 container init 877ca3c192db9fe32378c3b85dc4dc7a58516187d380ee61653ca41df4c6d6d3 (image=quay.io/ceph/ceph:v19, name=pedantic_shirley, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 04:41:28 np0005549474 podman[85642]: 2025-12-07 09:41:28.794293608 +0000 UTC m=+0.368390461 container start 877ca3c192db9fe32378c3b85dc4dc7a58516187d380ee61653ca41df4c6d6d3 (image=quay.io/ceph/ceph:v19, name=pedantic_shirley, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:41:28 np0005549474 podman[85642]: 2025-12-07 09:41:28.798408698 +0000 UTC m=+0.372505561 container attach 877ca3c192db9fe32378c3b85dc4dc7a58516187d380ee61653ca41df4c6d6d3 (image=quay.io/ceph/ceph:v19, name=pedantic_shirley, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 04:41:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v72: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 04:41:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
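The autoscaler appears to raise placement groups in two steps here: 'pg_num' records the target (dispatched for 'vms' and 'volumes' above), while 'pg_num_actual' applies the split immediately (the variable names are as logged; the two-step reading is an inference). The hand-run equivalent of what the mgr dispatched:

    # set the PG target for the pool, as the mgr's mon_command does above
    ceph osd pool set vms pg_num 32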
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/831807626' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/831807626' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Dec  7 04:41:29 np0005549474 pedantic_shirley[85657]: pool 'backups' created
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Dec  7 04:41:29 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:29 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev e7b30203-0585-4054-b301-a69c122d266b (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  7 04:41:29 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 69fb25db-c09d-46c0-b57e-eaff20d4c8e2 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  7 04:41:29 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 69fb25db-c09d-46c0-b57e-eaff20d4c8e2 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 1 seconds
Dec  7 04:41:29 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev e7b30203-0585-4054-b301-a69c122d266b (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  7 04:41:29 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event e7b30203-0585-4054-b301-a69c122d266b (PG autoscaler increasing pool 3 PGs from 1 to 32) in 0 seconds
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.buauyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: Deploying daemon mgr.compute-1.buauyv on compute-1
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:41:29 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/831807626' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:29 np0005549474 systemd[1]: libpod-877ca3c192db9fe32378c3b85dc4dc7a58516187d380ee61653ca41df4c6d6d3.scope: Deactivated successfully.
Dec  7 04:41:29 np0005549474 podman[85642]: 2025-12-07 09:41:29.770901171 +0000 UTC m=+1.344998074 container died 877ca3c192db9fe32378c3b85dc4dc7a58516187d380ee61653ca41df4c6d6d3 (image=quay.io/ceph/ceph:v19, name=pedantic_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:41:29 np0005549474 systemd[1]: var-lib-containers-storage-overlay-8d88463bb5036ebfb809f46d4a8ad5821058d6c41c32dd5914420a15e8ddef22-merged.mount: Deactivated successfully.
Dec  7 04:41:29 np0005549474 podman[85642]: 2025-12-07 09:41:29.819460687 +0000 UTC m=+1.393557540 container remove 877ca3c192db9fe32378c3b85dc4dc7a58516187d380ee61653ca41df4c6d6d3 (image=quay.io/ceph/ceph:v19, name=pedantic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:41:29 np0005549474 systemd[1]: libpod-conmon-877ca3c192db9fe32378c3b85dc4dc7a58516187d380ee61653ca41df4c6d6d3.scope: Deactivated successfully.
Dec  7 04:41:30 np0005549474 python3[85721]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:41:30 np0005549474 podman[85722]: 2025-12-07 09:41:30.133307354 +0000 UTC m=+0.036560902 container create 9b4447ca6aa3743c131e383a4ccd861a516bf3c6061bd4bfbae2162e16d3cbcb (image=quay.io/ceph/ceph:v19, name=tender_bose, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:41:30 np0005549474 systemd[1]: Started libpod-conmon-9b4447ca6aa3743c131e383a4ccd861a516bf3c6061bd4bfbae2162e16d3cbcb.scope.
Dec  7 04:41:30 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:30 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321642d61b08a696d3af8adafbc9a222f35ab13502d92555a1a4bf999d67c977/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:30 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321642d61b08a696d3af8adafbc9a222f35ab13502d92555a1a4bf999d67c977/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:30 np0005549474 podman[85722]: 2025-12-07 09:41:30.199771715 +0000 UTC m=+0.103025303 container init 9b4447ca6aa3743c131e383a4ccd861a516bf3c6061bd4bfbae2162e16d3cbcb (image=quay.io/ceph/ceph:v19, name=tender_bose, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:41:30 np0005549474 podman[85722]: 2025-12-07 09:41:30.204919531 +0000 UTC m=+0.108173079 container start 9b4447ca6aa3743c131e383a4ccd861a516bf3c6061bd4bfbae2162e16d3cbcb (image=quay.io/ceph/ceph:v19, name=tender_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:41:30 np0005549474 podman[85722]: 2025-12-07 09:41:30.208449759 +0000 UTC m=+0.111703317 container attach 9b4447ca6aa3743c131e383a4ccd861a516bf3c6061bd4bfbae2162e16d3cbcb (image=quay.io/ceph/ceph:v19, name=tender_bose, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 04:41:30 np0005549474 podman[85722]: 2025-12-07 09:41:30.116491125 +0000 UTC m=+0.019744703 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:30 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 9e02d5ca-5497-48c6-9b62-f51b67f1fe5a (Updating mgr deployment (+2 -> 3))
Dec  7 04:41:30 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 9e02d5ca-5497-48c6-9b62-f51b67f1fe5a (Updating mgr deployment (+2 -> 3)) in 13 seconds
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:30 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 565b21cc-b8a6-4472-9a0d-5ca2755c08f8 (Updating crash deployment (+1 -> 3))
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:41:30 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Dec  7 04:41:30 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2061751046' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2061751046' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Dec  7 04:41:30 np0005549474 tender_bose[85737]: pool 'images' created
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Dec  7 04:41:30 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/831807626' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2061751046' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:30 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2061751046' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 04:41:30 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:30 np0005549474 systemd[1]: libpod-9b4447ca6aa3743c131e383a4ccd861a516bf3c6061bd4bfbae2162e16d3cbcb.scope: Deactivated successfully.
Dec  7 04:41:30 np0005549474 podman[85722]: 2025-12-07 09:41:30.767289766 +0000 UTC m=+0.670543354 container died 9b4447ca6aa3743c131e383a4ccd861a516bf3c6061bd4bfbae2162e16d3cbcb (image=quay.io/ceph/ceph:v19, name=tender_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 04:41:30 np0005549474 systemd[1]: var-lib-containers-storage-overlay-321642d61b08a696d3af8adafbc9a222f35ab13502d92555a1a4bf999d67c977-merged.mount: Deactivated successfully.
Dec  7 04:41:30 np0005549474 podman[85722]: 2025-12-07 09:41:30.801996756 +0000 UTC m=+0.705250314 container remove 9b4447ca6aa3743c131e383a4ccd861a516bf3c6061bd4bfbae2162e16d3cbcb (image=quay.io/ceph/ceph:v19, name=tender_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 04:41:30 np0005549474 systemd[1]: libpod-conmon-9b4447ca6aa3743c131e383a4ccd861a516bf3c6061bd4bfbae2162e16d3cbcb.scope: Deactivated successfully.
Dec  7 04:41:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v75: 36 pgs: 34 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:31 np0005549474 python3[85802]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:41:31 np0005549474 podman[85803]: 2025-12-07 09:41:31.116109206 +0000 UTC m=+0.035418573 container create 94a752c034f9dce54216dae199bdd938f27f027191708fc2ba021b6c5574aba4 (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:41:31 np0005549474 systemd[1]: Started libpod-conmon-94a752c034f9dce54216dae199bdd938f27f027191708fc2ba021b6c5574aba4.scope.
Dec  7 04:41:31 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349e11a468954bc50de33689118cca786775ff537a25a20e7ab77638c4d84e8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349e11a468954bc50de33689118cca786775ff537a25a20e7ab77638c4d84e8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:31 np0005549474 podman[85803]: 2025-12-07 09:41:31.099952757 +0000 UTC m=+0.019262144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:31 np0005549474 podman[85803]: 2025-12-07 09:41:31.19984753 +0000 UTC m=+0.119156957 container init 94a752c034f9dce54216dae199bdd938f27f027191708fc2ba021b6c5574aba4 (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:41:31 np0005549474 podman[85803]: 2025-12-07 09:41:31.205489881 +0000 UTC m=+0.124799268 container start 94a752c034f9dce54216dae199bdd938f27f027191708fc2ba021b6c5574aba4 (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 04:41:31 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 7 completed events
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:41:31 np0005549474 podman[85803]: 2025-12-07 09:41:31.208988238 +0000 UTC m=+0.128297625 container attach 94a752c034f9dce54216dae199bdd938f27f027191708fc2ba021b6c5574aba4 (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:31 np0005549474 ceph-mgr[74811]: [progress WARNING root] Starting Global Recovery Event,34 pgs not in active + clean state
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ntknug started
Dec  7 04:41:31 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mgr.compute-2.ntknug 192.168.122.102:0/4006767322; not ready for session (expect reconnect)
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1239247002' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1239247002' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Dec  7 04:41:31 np0005549474 goofy_grothendieck[85819]: pool 'cephfs.cephfs.meta' created
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Dec  7 04:41:31 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:31 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: Deploying daemon crash.compute-2 on compute-2
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:31 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1239247002' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:31 np0005549474 systemd[1]: libpod-94a752c034f9dce54216dae199bdd938f27f027191708fc2ba021b6c5574aba4.scope: Deactivated successfully.
Dec  7 04:41:31 np0005549474 conmon[85819]: conmon 94a752c034f9dce54216 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94a752c034f9dce54216dae199bdd938f27f027191708fc2ba021b6c5574aba4.scope/container/memory.events
Dec  7 04:41:31 np0005549474 podman[85803]: 2025-12-07 09:41:31.786715437 +0000 UTC m=+0.706024804 container died 94a752c034f9dce54216dae199bdd938f27f027191708fc2ba021b6c5574aba4 (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:41:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay-349e11a468954bc50de33689118cca786775ff537a25a20e7ab77638c4d84e8b-merged.mount: Deactivated successfully.
Dec  7 04:41:31 np0005549474 podman[85803]: 2025-12-07 09:41:31.816016414 +0000 UTC m=+0.735325781 container remove 94a752c034f9dce54216dae199bdd938f27f027191708fc2ba021b6c5574aba4 (image=quay.io/ceph/ceph:v19, name=goofy_grothendieck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:41:31 np0005549474 systemd[1]: libpod-conmon-94a752c034f9dce54216dae199bdd938f27f027191708fc2ba021b6c5574aba4.scope: Deactivated successfully.
Dec  7 04:41:32 np0005549474 python3[85883]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:41:32 np0005549474 podman[85884]: 2025-12-07 09:41:32.113787749 +0000 UTC m=+0.043487718 container create 26929bb42c326f07964d001f5b46df07f15def920fe6cfe2786f374464e02cd8 (image=quay.io/ceph/ceph:v19, name=dazzling_babbage, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  7 04:41:32 np0005549474 systemd[1]: Started libpod-conmon-26929bb42c326f07964d001f5b46df07f15def920fe6cfe2786f374464e02cd8.scope.
Dec  7 04:41:32 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d9068483b880405f618a6ff18d383e984fe85e4a3d97fe82239c4abc23991f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d9068483b880405f618a6ff18d383e984fe85e4a3d97fe82239c4abc23991f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:32 np0005549474 podman[85884]: 2025-12-07 09:41:32.182729295 +0000 UTC m=+0.112429294 container init 26929bb42c326f07964d001f5b46df07f15def920fe6cfe2786f374464e02cd8 (image=quay.io/ceph/ceph:v19, name=dazzling_babbage, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Dec  7 04:41:32 np0005549474 podman[85884]: 2025-12-07 09:41:32.187894002 +0000 UTC m=+0.117593961 container start 26929bb42c326f07964d001f5b46df07f15def920fe6cfe2786f374464e02cd8 (image=quay.io/ceph/ceph:v19, name=dazzling_babbage, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 04:41:32 np0005549474 podman[85884]: 2025-12-07 09:41:32.094567557 +0000 UTC m=+0.024267566 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:32 np0005549474 podman[85884]: 2025-12-07 09:41:32.194081699 +0000 UTC m=+0.123781678 container attach 26929bb42c326f07964d001f5b46df07f15def920fe6cfe2786f374464e02cd8 (image=quay.io/ceph/ceph:v19, name=dazzling_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.dotugk(active, since 2m), standbys: compute-2.ntknug
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"} v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"}]: dispatch
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:32 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 565b21cc-b8a6-4472-9a0d-5ca2755c08f8 (Updating crash deployment (+1 -> 3))
Dec  7 04:41:32 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 565b21cc-b8a6-4472-9a0d-5ca2755c08f8 (Updating crash deployment (+1 -> 3)) in 2 seconds
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3140489880' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec  7 04:41:32 np0005549474 podman[86018]: 2025-12-07 09:41:32.891952325 +0000 UTC m=+0.041687552 container create c4cf2a85e445442a9921180f2c1e1eccfab353348db42a7826d9835f9a4472f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:41:32 np0005549474 systemd[1]: Started libpod-conmon-c4cf2a85e445442a9921180f2c1e1eccfab353348db42a7826d9835f9a4472f5.scope.
Dec  7 04:41:32 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:32 np0005549474 podman[86018]: 2025-12-07 09:41:32.965467252 +0000 UTC m=+0.115202499 container init c4cf2a85e445442a9921180f2c1e1eccfab353348db42a7826d9835f9a4472f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_khayyam, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 04:41:32 np0005549474 podman[86018]: 2025-12-07 09:41:32.970140132 +0000 UTC m=+0.119875359 container start c4cf2a85e445442a9921180f2c1e1eccfab353348db42a7826d9835f9a4472f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 04:41:32 np0005549474 podman[86018]: 2025-12-07 09:41:32.876126547 +0000 UTC m=+0.025861804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:41:32 np0005549474 podman[86018]: 2025-12-07 09:41:32.972972988 +0000 UTC m=+0.122708225 container attach c4cf2a85e445442a9921180f2c1e1eccfab353348db42a7826d9835f9a4472f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:41:32 np0005549474 competent_khayyam[86035]: 167 167
Dec  7 04:41:32 np0005549474 systemd[1]: libpod-c4cf2a85e445442a9921180f2c1e1eccfab353348db42a7826d9835f9a4472f5.scope: Deactivated successfully.
Dec  7 04:41:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v77: 37 pgs: 1 creating+peering, 36 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:32 np0005549474 conmon[86035]: conmon c4cf2a85e445442a9921 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4cf2a85e445442a9921180f2c1e1eccfab353348db42a7826d9835f9a4472f5.scope/container/memory.events
Dec  7 04:41:32 np0005549474 podman[86018]: 2025-12-07 09:41:32.975092262 +0000 UTC m=+0.124827489 container died c4cf2a85e445442a9921180f2c1e1eccfab353348db42a7826d9835f9a4472f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 04:41:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:41:32 np0005549474 systemd[1]: var-lib-containers-storage-overlay-1e7b2da49e18b1f2cdcd100f767f3c79ca9b46609541acace7f20824024157a1-merged.mount: Deactivated successfully.
Dec  7 04:41:33 np0005549474 podman[86018]: 2025-12-07 09:41:33.01032373 +0000 UTC m=+0.160058957 container remove c4cf2a85e445442a9921180f2c1e1eccfab353348db42a7826d9835f9a4472f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:41:33 np0005549474 systemd[1]: libpod-conmon-c4cf2a85e445442a9921180f2c1e1eccfab353348db42a7826d9835f9a4472f5.scope: Deactivated successfully.
Dec  7 04:41:33 np0005549474 podman[86057]: 2025-12-07 09:41:33.163089704 +0000 UTC m=+0.034802005 container create bf303b6252cf79cdf59a53e6fd2c41ea01dc54d47fda945528d865fc16bdae6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_goldstine, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  7 04:41:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3140489880' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 04:41:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Dec  7 04:41:33 np0005549474 dazzling_babbage[85900]: pool 'cephfs.cephfs.data' created
Dec  7 04:41:33 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Dec  7 04:41:33 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:33 np0005549474 podman[85884]: 2025-12-07 09:41:33.196737902 +0000 UTC m=+1.126437871 container died 26929bb42c326f07964d001f5b46df07f15def920fe6cfe2786f374464e02cd8 (image=quay.io/ceph/ceph:v19, name=dazzling_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 04:41:33 np0005549474 systemd[1]: Started libpod-conmon-bf303b6252cf79cdf59a53e6fd2c41ea01dc54d47fda945528d865fc16bdae6d.scope.
Dec  7 04:41:33 np0005549474 systemd[1]: libpod-26929bb42c326f07964d001f5b46df07f15def920fe6cfe2786f374464e02cd8.scope: Deactivated successfully.
Dec  7 04:41:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-02d9068483b880405f618a6ff18d383e984fe85e4a3d97fe82239c4abc23991f-merged.mount: Deactivated successfully.
Dec  7 04:41:33 np0005549474 podman[85884]: 2025-12-07 09:41:33.233186146 +0000 UTC m=+1.162886085 container remove 26929bb42c326f07964d001f5b46df07f15def920fe6cfe2786f374464e02cd8 (image=quay.io/ceph/ceph:v19, name=dazzling_babbage, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 04:41:33 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1239247002' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 04:41:33 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:33 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:33 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:33 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:33 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:41:33 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:41:33 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3140489880' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 04:41:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e4fbaec54dbf3d2d368461d6558b99a3e826aae9834e04f3970595af9a15c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e4fbaec54dbf3d2d368461d6558b99a3e826aae9834e04f3970595af9a15c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e4fbaec54dbf3d2d368461d6558b99a3e826aae9834e04f3970595af9a15c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e4fbaec54dbf3d2d368461d6558b99a3e826aae9834e04f3970595af9a15c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e4fbaec54dbf3d2d368461d6558b99a3e826aae9834e04f3970595af9a15c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:33 np0005549474 podman[86057]: 2025-12-07 09:41:33.146179662 +0000 UTC m=+0.017891983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:41:33 np0005549474 systemd[1]: libpod-conmon-26929bb42c326f07964d001f5b46df07f15def920fe6cfe2786f374464e02cd8.scope: Deactivated successfully.
Dec  7 04:41:33 np0005549474 podman[86057]: 2025-12-07 09:41:33.25611923 +0000 UTC m=+0.127831541 container init bf303b6252cf79cdf59a53e6fd2c41ea01dc54d47fda945528d865fc16bdae6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_goldstine, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 04:41:33 np0005549474 podman[86057]: 2025-12-07 09:41:33.265316119 +0000 UTC m=+0.137028410 container start bf303b6252cf79cdf59a53e6fd2c41ea01dc54d47fda945528d865fc16bdae6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 04:41:33 np0005549474 podman[86057]: 2025-12-07 09:41:33.268738002 +0000 UTC m=+0.140450303 container attach bf303b6252cf79cdf59a53e6fd2c41ea01dc54d47fda945528d865fc16bdae6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_goldstine, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec  7 04:41:33 np0005549474 python3[86115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:41:33 np0005549474 sweet_goldstine[86082]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:41:33 np0005549474 sweet_goldstine[86082]: --> All data devices are unavailable
Dec  7 04:41:33 np0005549474 systemd[1]: libpod-bf303b6252cf79cdf59a53e6fd2c41ea01dc54d47fda945528d865fc16bdae6d.scope: Deactivated successfully.
Dec  7 04:41:33 np0005549474 podman[86057]: 2025-12-07 09:41:33.577395066 +0000 UTC m=+0.449107367 container died bf303b6252cf79cdf59a53e6fd2c41ea01dc54d47fda945528d865fc16bdae6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:41:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a0e4fbaec54dbf3d2d368461d6558b99a3e826aae9834e04f3970595af9a15c6-merged.mount: Deactivated successfully.
Dec  7 04:41:33 np0005549474 podman[86126]: 2025-12-07 09:41:33.607147547 +0000 UTC m=+0.048405246 container create b1b7be27aad36eea5bfe75370d88fed079316f1813a0547ae4ca411c75b4922a (image=quay.io/ceph/ceph:v19, name=naughty_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:41:33 np0005549474 podman[86057]: 2025-12-07 09:41:33.624600925 +0000 UTC m=+0.496313216 container remove bf303b6252cf79cdf59a53e6fd2c41ea01dc54d47fda945528d865fc16bdae6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_goldstine, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:41:33 np0005549474 systemd[1]: libpod-conmon-bf303b6252cf79cdf59a53e6fd2c41ea01dc54d47fda945528d865fc16bdae6d.scope: Deactivated successfully.
Dec  7 04:41:33 np0005549474 systemd[1]: Started libpod-conmon-b1b7be27aad36eea5bfe75370d88fed079316f1813a0547ae4ca411c75b4922a.scope.
Dec  7 04:41:33 np0005549474 podman[86126]: 2025-12-07 09:41:33.582035697 +0000 UTC m=+0.023293436 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e303b0742669f72578b4b38c415f6e7b75cc4072f29b0906443eaf063329da3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e303b0742669f72578b4b38c415f6e7b75cc4072f29b0906443eaf063329da3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:33 np0005549474 podman[86126]: 2025-12-07 09:41:33.706964248 +0000 UTC m=+0.148221977 container init b1b7be27aad36eea5bfe75370d88fed079316f1813a0547ae4ca411c75b4922a (image=quay.io/ceph/ceph:v19, name=naughty_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:41:33 np0005549474 podman[86126]: 2025-12-07 09:41:33.712537097 +0000 UTC m=+0.153794806 container start b1b7be27aad36eea5bfe75370d88fed079316f1813a0547ae4ca411c75b4922a (image=quay.io/ceph/ceph:v19, name=naughty_jepsen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 04:41:33 np0005549474 podman[86126]: 2025-12-07 09:41:33.717448196 +0000 UTC m=+0.158705915 container attach b1b7be27aad36eea5bfe75370d88fed079316f1813a0547ae4ca411c75b4922a (image=quay.io/ceph/ceph:v19, name=naughty_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1573311624' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  7 04:41:34 np0005549474 podman[86262]: 2025-12-07 09:41:34.159683754 +0000 UTC m=+0.060081230 container create 56f3dd107a8126538633f8f08d5703c0390eacd6e6c5722fb74b7c65babef819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_murdock, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1573311624' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Dec  7 04:41:34 np0005549474 naughty_jepsen[86152]: enabled application 'rbd' on pool 'vms'
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Dec  7 04:41:34 np0005549474 systemd[1]: Started libpod-conmon-56f3dd107a8126538633f8f08d5703c0390eacd6e6c5722fb74b7c65babef819.scope.
Dec  7 04:41:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:34 np0005549474 systemd[1]: libpod-b1b7be27aad36eea5bfe75370d88fed079316f1813a0547ae4ca411c75b4922a.scope: Deactivated successfully.
Dec  7 04:41:34 np0005549474 podman[86126]: 2025-12-07 09:41:34.208147251 +0000 UTC m=+0.649404970 container died b1b7be27aad36eea5bfe75370d88fed079316f1813a0547ae4ca411c75b4922a (image=quay.io/ceph/ceph:v19, name=naughty_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:41:34 np0005549474 podman[86262]: 2025-12-07 09:41:34.122581481 +0000 UTC m=+0.022979017 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:41:34 np0005549474 podman[86262]: 2025-12-07 09:41:34.223230168 +0000 UTC m=+0.123627634 container init 56f3dd107a8126538633f8f08d5703c0390eacd6e6c5722fb74b7c65babef819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_murdock, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:41:34 np0005549474 podman[86262]: 2025-12-07 09:41:34.231286922 +0000 UTC m=+0.131684358 container start 56f3dd107a8126538633f8f08d5703c0390eacd6e6c5722fb74b7c65babef819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_murdock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 04:41:34 np0005549474 boring_murdock[86280]: 167 167
Dec  7 04:41:34 np0005549474 systemd[1]: libpod-56f3dd107a8126538633f8f08d5703c0390eacd6e6c5722fb74b7c65babef819.scope: Deactivated successfully.
Dec  7 04:41:34 np0005549474 podman[86262]: 2025-12-07 09:41:34.236073866 +0000 UTC m=+0.136471322 container attach 56f3dd107a8126538633f8f08d5703c0390eacd6e6c5722fb74b7c65babef819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_murdock, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 04:41:34 np0005549474 podman[86262]: 2025-12-07 09:41:34.236432617 +0000 UTC m=+0.136830053 container died 56f3dd107a8126538633f8f08d5703c0390eacd6e6c5722fb74b7c65babef819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3140489880' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1573311624' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:41:34 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1573311624' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  7 04:41:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5e303b0742669f72578b4b38c415f6e7b75cc4072f29b0906443eaf063329da3-merged.mount: Deactivated successfully.
Dec  7 04:41:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-88e838e1cf6df098409dba044ed1f7f890b543b4334e684518f7dc7843c7ce15-merged.mount: Deactivated successfully.
Dec  7 04:41:34 np0005549474 podman[86262]: 2025-12-07 09:41:34.284888844 +0000 UTC m=+0.185286280 container remove 56f3dd107a8126538633f8f08d5703c0390eacd6e6c5722fb74b7c65babef819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_murdock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:41:34 np0005549474 systemd[1]: libpod-conmon-56f3dd107a8126538633f8f08d5703c0390eacd6e6c5722fb74b7c65babef819.scope: Deactivated successfully.
Dec  7 04:41:34 np0005549474 podman[86126]: 2025-12-07 09:41:34.297005231 +0000 UTC m=+0.738262930 container remove b1b7be27aad36eea5bfe75370d88fed079316f1813a0547ae4ca411c75b4922a (image=quay.io/ceph/ceph:v19, name=naughty_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 04:41:34 np0005549474 systemd[1]: libpod-conmon-b1b7be27aad36eea5bfe75370d88fed079316f1813a0547ae4ca411c75b4922a.scope: Deactivated successfully.
Dec  7 04:41:34 np0005549474 podman[86315]: 2025-12-07 09:41:34.421132708 +0000 UTC m=+0.037480795 container create 5c72d8b3270554d660c2af1ff9973f2f5a08b325d791b63c29657ed656830643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:41:34 np0005549474 systemd[1]: Started libpod-conmon-5c72d8b3270554d660c2af1ff9973f2f5a08b325d791b63c29657ed656830643.scope.
Dec  7 04:41:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8920d4a4f2745aac873acdfa82ef4b9cfef310953ec0a2ea60a682718266d133/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8920d4a4f2745aac873acdfa82ef4b9cfef310953ec0a2ea60a682718266d133/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8920d4a4f2745aac873acdfa82ef4b9cfef310953ec0a2ea60a682718266d133/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8920d4a4f2745aac873acdfa82ef4b9cfef310953ec0a2ea60a682718266d133/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:34 np0005549474 podman[86315]: 2025-12-07 09:41:34.40530647 +0000 UTC m=+0.021654577 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:41:34 np0005549474 podman[86315]: 2025-12-07 09:41:34.501144231 +0000 UTC m=+0.117492318 container init 5c72d8b3270554d660c2af1ff9973f2f5a08b325d791b63c29657ed656830643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:41:34 np0005549474 podman[86315]: 2025-12-07 09:41:34.508169213 +0000 UTC m=+0.124517300 container start 5c72d8b3270554d660c2af1ff9973f2f5a08b325d791b63c29657ed656830643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:41:34 np0005549474 podman[86315]: 2025-12-07 09:41:34.52325284 +0000 UTC m=+0.139600927 container attach 5c72d8b3270554d660c2af1ff9973f2f5a08b325d791b63c29657ed656830643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:41:34 np0005549474 python3[86358]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:41:34 np0005549474 podman[86362]: 2025-12-07 09:41:34.638929172 +0000 UTC m=+0.036583739 container create 3abc21cb60cc4e0ca11eeba8ca3745f8a86664ce30ff18131799608df2749a2e (image=quay.io/ceph/ceph:v19, name=vibrant_grothendieck, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:41:34 np0005549474 systemd[1]: Started libpod-conmon-3abc21cb60cc4e0ca11eeba8ca3745f8a86664ce30ff18131799608df2749a2e.scope.
Dec  7 04:41:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceeb6f70abc19ce7cb14060e3d9789a8f46ebdca0e4de176bb1c940a6d158f5c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceeb6f70abc19ce7cb14060e3d9789a8f46ebdca0e4de176bb1c940a6d158f5c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:34 np0005549474 podman[86362]: 2025-12-07 09:41:34.701667411 +0000 UTC m=+0.099321998 container init 3abc21cb60cc4e0ca11eeba8ca3745f8a86664ce30ff18131799608df2749a2e (image=quay.io/ceph/ceph:v19, name=vibrant_grothendieck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:41:34 np0005549474 podman[86362]: 2025-12-07 09:41:34.709872959 +0000 UTC m=+0.107527526 container start 3abc21cb60cc4e0ca11eeba8ca3745f8a86664ce30ff18131799608df2749a2e (image=quay.io/ceph/ceph:v19, name=vibrant_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:41:34 np0005549474 podman[86362]: 2025-12-07 09:41:34.713941013 +0000 UTC m=+0.111595600 container attach 3abc21cb60cc4e0ca11eeba8ca3745f8a86664ce30ff18131799608df2749a2e (image=quay.io/ceph/ceph:v19, name=vibrant_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:41:34 np0005549474 podman[86362]: 2025-12-07 09:41:34.623591108 +0000 UTC m=+0.021245695 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]: {
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:    "0": [
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:        {
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            "devices": [
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "/dev/loop3"
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            ],
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            "lv_name": "ceph_lv0",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            "lv_size": "21470642176",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            "name": "ceph_lv0",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            "tags": {
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.cluster_name": "ceph",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.crush_device_class": "",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.encrypted": "0",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.osd_id": "0",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.type": "block",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.vdo": "0",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:                "ceph.with_tpm": "0"
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            },
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            "type": "block",
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:            "vg_name": "ceph_vg0"
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:        }
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]:    ]
Dec  7 04:41:34 np0005549474 fervent_matsumoto[86356]: }
Dec  7 04:41:34 np0005549474 systemd[1]: libpod-5c72d8b3270554d660c2af1ff9973f2f5a08b325d791b63c29657ed656830643.scope: Deactivated successfully.
Dec  7 04:41:34 np0005549474 podman[86405]: 2025-12-07 09:41:34.867661967 +0000 UTC m=+0.021004088 container died 5c72d8b3270554d660c2af1ff9973f2f5a08b325d791b63c29657ed656830643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:41:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-8920d4a4f2745aac873acdfa82ef4b9cfef310953ec0a2ea60a682718266d133-merged.mount: Deactivated successfully.
Dec  7 04:41:34 np0005549474 podman[86405]: 2025-12-07 09:41:34.905337047 +0000 UTC m=+0.058679168 container remove 5c72d8b3270554d660c2af1ff9973f2f5a08b325d791b63c29657ed656830643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_matsumoto, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 04:41:34 np0005549474 systemd[1]: libpod-conmon-5c72d8b3270554d660c2af1ff9973f2f5a08b325d791b63c29657ed656830643.scope: Deactivated successfully.
Dec  7 04:41:34 np0005549474 systemd[75863]: Starting Mark boot as successful...
Dec  7 04:41:34 np0005549474 systemd[75863]: Finished Mark boot as successful.
Dec  7 04:41:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v80: 69 pgs: 32 unknown, 1 creating+peering, 36 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "0c34425f-bd69-4050-94f0-696d2e70c759"} v 0)
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0c34425f-bd69-4050-94f0-696d2e70c759"}]: dispatch
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0c34425f-bd69-4050-94f0-696d2e70c759"}]': finished
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:41:35 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 23 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23 pruub=9.742315292s) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active pruub 61.603248596s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23 pruub=9.742315292s) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown pruub 61.603248596s@ mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.9( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.a( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.7( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.8( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.e( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.d( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.b( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.c( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.f( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.10( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.11( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.12( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.15( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.16( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.13( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.14( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.17( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.18( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.1b( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.1c( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.19( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.1a( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.1f( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.1d( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.1e( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.1( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.2( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.5( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.6( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.3( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 24 pg[3.4( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3615880316' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0c34425f-bd69-4050-94f0-696d2e70c759"}]: dispatch
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0c34425f-bd69-4050-94f0-696d2e70c759"}]': finished
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.102:0/2408157544' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0c34425f-bd69-4050-94f0-696d2e70c759"}]: dispatch
Dec  7 04:41:35 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3615880316' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  7 04:41:35 np0005549474 podman[86512]: 2025-12-07 09:41:35.468176406 +0000 UTC m=+0.036502506 container create aee9e5ff25161becff4faed4e41f0c892f60498214482443ab572120ff7d5502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 04:41:35 np0005549474 systemd[1]: Started libpod-conmon-aee9e5ff25161becff4faed4e41f0c892f60498214482443ab572120ff7d5502.scope.
Dec  7 04:41:35 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:35 np0005549474 podman[86512]: 2025-12-07 09:41:35.545034333 +0000 UTC m=+0.113360443 container init aee9e5ff25161becff4faed4e41f0c892f60498214482443ab572120ff7d5502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 04:41:35 np0005549474 podman[86512]: 2025-12-07 09:41:35.452067958 +0000 UTC m=+0.020394078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:41:35 np0005549474 podman[86512]: 2025-12-07 09:41:35.550965872 +0000 UTC m=+0.119291972 container start aee9e5ff25161becff4faed4e41f0c892f60498214482443ab572120ff7d5502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ardinghelli, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:41:35 np0005549474 podman[86512]: 2025-12-07 09:41:35.55417909 +0000 UTC m=+0.122505220 container attach aee9e5ff25161becff4faed4e41f0c892f60498214482443ab572120ff7d5502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Dec  7 04:41:35 np0005549474 fervent_ardinghelli[86529]: 167 167
Dec  7 04:41:35 np0005549474 systemd[1]: libpod-aee9e5ff25161becff4faed4e41f0c892f60498214482443ab572120ff7d5502.scope: Deactivated successfully.
Dec  7 04:41:35 np0005549474 conmon[86529]: conmon aee9e5ff25161becff4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aee9e5ff25161becff4faed4e41f0c892f60498214482443ab572120ff7d5502.scope/container/memory.events
Dec  7 04:41:35 np0005549474 podman[86512]: 2025-12-07 09:41:35.557066337 +0000 UTC m=+0.125392447 container died aee9e5ff25161becff4faed4e41f0c892f60498214482443ab572120ff7d5502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ardinghelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:41:35 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b26315e44da579738dd7ce666a4b7ab3e3dae2cb77285c581d9ed69b41614ff5-merged.mount: Deactivated successfully.
Dec  7 04:41:35 np0005549474 podman[86512]: 2025-12-07 09:41:35.601482872 +0000 UTC m=+0.169809062 container remove aee9e5ff25161becff4faed4e41f0c892f60498214482443ab572120ff7d5502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:41:35 np0005549474 systemd[1]: libpod-conmon-aee9e5ff25161becff4faed4e41f0c892f60498214482443ab572120ff7d5502.scope: Deactivated successfully.
Dec  7 04:41:35 np0005549474 podman[86552]: 2025-12-07 09:41:35.790164723 +0000 UTC m=+0.042688663 container create d2994b2b2de41fc93af8997440da87c1260ad0b62c6489becf7e981a4d6a7464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:41:35 np0005549474 systemd[1]: Started libpod-conmon-d2994b2b2de41fc93af8997440da87c1260ad0b62c6489becf7e981a4d6a7464.scope.
Dec  7 04:41:35 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29abefe55f8b7cd627ab8994186a6744d4721cac914eee904c23ac9afdf0007f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29abefe55f8b7cd627ab8994186a6744d4721cac914eee904c23ac9afdf0007f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29abefe55f8b7cd627ab8994186a6744d4721cac914eee904c23ac9afdf0007f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29abefe55f8b7cd627ab8994186a6744d4721cac914eee904c23ac9afdf0007f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:35 np0005549474 podman[86552]: 2025-12-07 09:41:35.770953521 +0000 UTC m=+0.023477511 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:41:35 np0005549474 podman[86552]: 2025-12-07 09:41:35.871382572 +0000 UTC m=+0.123906622 container init d2994b2b2de41fc93af8997440da87c1260ad0b62c6489becf7e981a4d6a7464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_buck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:41:35 np0005549474 podman[86552]: 2025-12-07 09:41:35.877543848 +0000 UTC m=+0.130067808 container start d2994b2b2de41fc93af8997440da87c1260ad0b62c6489becf7e981a4d6a7464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_buck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Dec  7 04:41:35 np0005549474 podman[86552]: 2025-12-07 09:41:35.881369305 +0000 UTC m=+0.133893295 container attach d2994b2b2de41fc93af8997440da87c1260ad0b62c6489becf7e981a4d6a7464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_buck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3615880316' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Dec  7 04:41:36 np0005549474 vibrant_grothendieck[86378]: enabled application 'rbd' on pool 'volumes'
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:41:36 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.1f( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.1d( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.1b( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.8( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.1c( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.4( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.9( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.1a( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.2( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.1( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.3( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.5( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.6( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.0( empty local-lis/les=23/25 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.b( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.c( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.d( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.a( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.e( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.10( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.f( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.13( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.12( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.7( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.14( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.15( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.11( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.17( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.16( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.19( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.18( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 25 pg[3.1e( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [0] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:36 np0005549474 systemd[1]: libpod-3abc21cb60cc4e0ca11eeba8ca3745f8a86664ce30ff18131799608df2749a2e.scope: Deactivated successfully.
Dec  7 04:41:36 np0005549474 podman[86362]: 2025-12-07 09:41:36.039503161 +0000 UTC m=+1.437157728 container died 3abc21cb60cc4e0ca11eeba8ca3745f8a86664ce30ff18131799608df2749a2e (image=quay.io/ceph/ceph:v19, name=vibrant_grothendieck, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:41:36 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ceeb6f70abc19ce7cb14060e3d9789a8f46ebdca0e4de176bb1c940a6d158f5c-merged.mount: Deactivated successfully.
Dec  7 04:41:36 np0005549474 podman[86362]: 2025-12-07 09:41:36.081006718 +0000 UTC m=+1.478661275 container remove 3abc21cb60cc4e0ca11eeba8ca3745f8a86664ce30ff18131799608df2749a2e (image=quay.io/ceph/ceph:v19, name=vibrant_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 04:41:36 np0005549474 systemd[1]: libpod-conmon-3abc21cb60cc4e0ca11eeba8ca3745f8a86664ce30ff18131799608df2749a2e.scope: Deactivated successfully.
Dec  7 04:41:36 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 8 completed events
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3615880316' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:36 np0005549474 python3[86642]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:41:36 np0005549474 podman[86674]: 2025-12-07 09:41:36.40379229 +0000 UTC m=+0.037739304 container create 9577820b265d058bbada1e89549550f269322e05c2d85f939a8eee578127a1a7 (image=quay.io/ceph/ceph:v19, name=romantic_merkle, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:41:36 np0005549474 systemd[1]: Started libpod-conmon-9577820b265d058bbada1e89549550f269322e05c2d85f939a8eee578127a1a7.scope.
Dec  7 04:41:36 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b878957165085bcac8841fb264e93a1141e86cd1bfcecba1593ef392a83e2412/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b878957165085bcac8841fb264e93a1141e86cd1bfcecba1593ef392a83e2412/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.buauyv started
Dec  7 04:41:36 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mgr.compute-1.buauyv 192.168.122.101:0/467082124; not ready for session (expect reconnect)
Dec  7 04:41:36 np0005549474 podman[86674]: 2025-12-07 09:41:36.388498747 +0000 UTC m=+0.022445791 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:36 np0005549474 podman[86674]: 2025-12-07 09:41:36.488697819 +0000 UTC m=+0.122644853 container init 9577820b265d058bbada1e89549550f269322e05c2d85f939a8eee578127a1a7 (image=quay.io/ceph/ceph:v19, name=romantic_merkle, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 04:41:36 np0005549474 lvm[86700]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:41:36 np0005549474 lvm[86700]: VG ceph_vg0 finished
Dec  7 04:41:36 np0005549474 podman[86674]: 2025-12-07 09:41:36.49564958 +0000 UTC m=+0.129596594 container start 9577820b265d058bbada1e89549550f269322e05c2d85f939a8eee578127a1a7 (image=quay.io/ceph/ceph:v19, name=romantic_merkle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:41:36 np0005549474 podman[86674]: 2025-12-07 09:41:36.49859729 +0000 UTC m=+0.132544304 container attach 9577820b265d058bbada1e89549550f269322e05c2d85f939a8eee578127a1a7 (image=quay.io/ceph/ceph:v19, name=romantic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:41:36 np0005549474 eloquent_buck[86568]: {}
Dec  7 04:41:36 np0005549474 systemd[1]: libpod-d2994b2b2de41fc93af8997440da87c1260ad0b62c6489becf7e981a4d6a7464.scope: Deactivated successfully.
Dec  7 04:41:36 np0005549474 podman[86552]: 2025-12-07 09:41:36.569736013 +0000 UTC m=+0.822259963 container died d2994b2b2de41fc93af8997440da87c1260ad0b62c6489becf7e981a4d6a7464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_buck, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:41:36 np0005549474 systemd[1]: libpod-d2994b2b2de41fc93af8997440da87c1260ad0b62c6489becf7e981a4d6a7464.scope: Consumed 1.025s CPU time.
Dec  7 04:41:36 np0005549474 systemd[1]: var-lib-containers-storage-overlay-29abefe55f8b7cd627ab8994186a6744d4721cac914eee904c23ac9afdf0007f-merged.mount: Deactivated successfully.
Dec  7 04:41:36 np0005549474 podman[86552]: 2025-12-07 09:41:36.609375753 +0000 UTC m=+0.861899703 container remove d2994b2b2de41fc93af8997440da87c1260ad0b62c6489becf7e981a4d6a7464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_buck, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 04:41:36 np0005549474 systemd[1]: libpod-conmon-d2994b2b2de41fc93af8997440da87c1260ad0b62c6489becf7e981a4d6a7464.scope: Deactivated successfully.
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec  7 04:41:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2197147500' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec  7 04:41:36 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec  7 04:41:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v83: 69 pgs: 31 unknown, 1 creating+peering, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:37 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mgr.compute-1.buauyv 192.168.122.101:0/467082124; not ready for session (expect reconnect)
Dec  7 04:41:37 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec  7 04:41:37 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec  7 04:41:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec  7 04:41:38 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from mgr.compute-1.buauyv 192.168.122.101:0/467082124; not ready for session (expect reconnect)
Dec  7 04:41:38 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:38 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:38 np0005549474 ceph-mon[74516]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:38 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2197147500' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  7 04:41:38 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec  7 04:41:38 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec  7 04:41:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v84: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2197147500' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Dec  7 04:41:39 np0005549474 romantic_merkle[86695]: enabled application 'rbd' on pool 'backups'
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.dotugk(active, since 2m), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"} v 0)
Dec  7 04:41:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"}]: dispatch
Dec  7 04:41:39 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 04:41:39 np0005549474 systemd[1]: libpod-9577820b265d058bbada1e89549550f269322e05c2d85f939a8eee578127a1a7.scope: Deactivated successfully.
Dec  7 04:41:39 np0005549474 podman[86674]: 2025-12-07 09:41:39.283678932 +0000 UTC m=+2.917625966 container died 9577820b265d058bbada1e89549550f269322e05c2d85f939a8eee578127a1a7 (image=quay.io/ceph/ceph:v19, name=romantic_merkle, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 04:41:39 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b878957165085bcac8841fb264e93a1141e86cd1bfcecba1593ef392a83e2412-merged.mount: Deactivated successfully.
Dec  7 04:41:39 np0005549474 podman[86674]: 2025-12-07 09:41:39.371271684 +0000 UTC m=+3.005218698 container remove 9577820b265d058bbada1e89549550f269322e05c2d85f939a8eee578127a1a7 (image=quay.io/ceph/ceph:v19, name=romantic_merkle, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:41:39 np0005549474 systemd[1]: libpod-conmon-9577820b265d058bbada1e89549550f269322e05c2d85f939a8eee578127a1a7.scope: Deactivated successfully.
Dec  7 04:41:39 np0005549474 python3[86774]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:41:39 np0005549474 podman[86775]: 2025-12-07 09:41:39.694623523 +0000 UTC m=+0.023228954 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:39 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec  7 04:41:39 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec  7 04:41:40 np0005549474 podman[86775]: 2025-12-07 09:41:40.011330609 +0000 UTC m=+0.339936000 container create a1a680e7bd73061403b7b371b7de3d107cd50d976f60e4016240949fcac782de (image=quay.io/ceph/ceph:v19, name=jolly_lumiere, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2197147500' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  7 04:41:40 np0005549474 systemd[1]: Started libpod-conmon-a1a680e7bd73061403b7b371b7de3d107cd50d976f60e4016240949fcac782de.scope.
Dec  7 04:41:40 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a80d0278217f6c915db69767c7c12e0a1e8ba0fe556843d87dba9f6b38f70eb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a80d0278217f6c915db69767c7c12e0a1e8ba0fe556843d87dba9f6b38f70eb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:40 np0005549474 podman[86775]: 2025-12-07 09:41:40.220185262 +0000 UTC m=+0.548790693 container init a1a680e7bd73061403b7b371b7de3d107cd50d976f60e4016240949fcac782de (image=quay.io/ceph/ceph:v19, name=jolly_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:41:40 np0005549474 podman[86775]: 2025-12-07 09:41:40.23070329 +0000 UTC m=+0.559308681 container start a1a680e7bd73061403b7b371b7de3d107cd50d976f60e4016240949fcac782de (image=quay.io/ceph/ceph:v19, name=jolly_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:41:40 np0005549474 podman[86775]: 2025-12-07 09:41:40.238220238 +0000 UTC m=+0.566825669 container attach a1a680e7bd73061403b7b371b7de3d107cd50d976f60e4016240949fcac782de (image=quay.io/ceph/ceph:v19, name=jolly_lumiere, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:41:40 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.1c( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.751556396s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.877548218s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.1a( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.751712799s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.877761841s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.1c( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.751514435s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.877548218s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.1d( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.751424789s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.877464294s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.9( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.751653671s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.877738953s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.1a( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.751682281s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.877761841s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.1d( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.751366615s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.877464294s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.9( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.751630783s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.877738953s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.3( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756937027s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883216858s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.3( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756922722s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883216858s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.5( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756834030s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883331299s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.a( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756932259s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883453369s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.c( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756863594s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883399963s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.5( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756806374s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883331299s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.c( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756846428s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883399963s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.a( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756912231s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883453369s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.d( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756749153s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883422852s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.d( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756714821s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883422852s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.f( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.757042885s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883773804s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.e( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756716728s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883468628s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.10( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.757013321s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883773804s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.f( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.757020950s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883773804s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.e( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756692886s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883468628s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.10( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756994247s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883773804s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.11( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.757049561s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883895874s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.11( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.757036209s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883895874s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.14( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756946564s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883872986s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.13( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756862640s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883789062s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.15( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756951332s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883895874s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.16( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.757012367s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 68.883964539s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.14( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756934166s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883872986s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.16( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756999969s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883964539s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.15( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756930351s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883895874s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[3.13( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27 pruub=11.756837845s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.883789062s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.1f( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.a( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.1e( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.6( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.9( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.4( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.1( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.d( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.c( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.10( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.e( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.13( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.15( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.19( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 27 pg[2.1b( empty local-lis/les=0/0 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1301740075' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.1e deep-scrub starts
Dec  7 04:41:40 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.1e deep-scrub ok
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:41:40 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:41:40 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Dec  7 04:41:40 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Dec  7 04:41:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v87: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:41 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:41:41 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:41:41 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1301740075' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  7 04:41:41 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  7 04:41:41 np0005549474 ceph-mon[74516]: Deploying daemon osd.2 on compute-2
Dec  7 04:41:41 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event f6807fa8-03b2-44fa-8eae-63bd0b3ffef4 (Global Recovery Event) in 10 seconds
Dec  7 04:41:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec  7 04:41:41 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec  7 04:41:41 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec  7 04:41:41 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1301740075' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  7 04:41:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Dec  7 04:41:42 np0005549474 jolly_lumiere[86790]: enabled application 'rbd' on pool 'images'
Dec  7 04:41:42 np0005549474 systemd[1]: libpod-a1a680e7bd73061403b7b371b7de3d107cd50d976f60e4016240949fcac782de.scope: Deactivated successfully.
Dec  7 04:41:42 np0005549474 podman[86775]: 2025-12-07 09:41:42.636488882 +0000 UTC m=+2.965094263 container died a1a680e7bd73061403b7b371b7de3d107cd50d976f60e4016240949fcac782de (image=quay.io/ceph/ceph:v19, name=jolly_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 04:41:42 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec  7 04:41:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Dec  7 04:41:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v89: 69 pgs: 32 peering, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:43 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec  7 04:41:43 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec  7 04:41:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:41:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:41:44 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 04:41:44 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec  7 04:41:44 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2a80d0278217f6c915db69767c7c12e0a1e8ba0fe556843d87dba9f6b38f70eb-merged.mount: Deactivated successfully.
Dec  7 04:41:44 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec  7 04:41:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v90: 69 pgs: 32 peering, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.1e( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.9( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.4( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.1b( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.6( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.d( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.1( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.c( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.a( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.e( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.10( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.13( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.15( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.1f( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 28 pg[2.19( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27) [0] r=0 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec  7 04:41:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:41:45 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec  7 04:41:46 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 9 completed events
Dec  7 04:41:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:41:46 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec  7 04:41:46 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec  7 04:41:46 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:41:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v91: 69 pgs: 2 active+clean+scrubbing, 17 activating, 15 peering, 35 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:47 np0005549474 ceph-mon[74516]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:47 np0005549474 podman[86775]: 2025-12-07 09:41:47.228281978 +0000 UTC m=+7.556887359 container remove a1a680e7bd73061403b7b371b7de3d107cd50d976f60e4016240949fcac782de (image=quay.io/ceph/ceph:v19, name=jolly_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:41:47 np0005549474 systemd[1]: libpod-conmon-a1a680e7bd73061403b7b371b7de3d107cd50d976f60e4016240949fcac782de.scope: Deactivated successfully.
Dec  7 04:41:47 np0005549474 python3[86857]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:41:47 np0005549474 podman[86858]: 2025-12-07 09:41:47.631843966 +0000 UTC m=+0.042695653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:48 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec  7 04:41:48 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec  7 04:41:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v92: 69 pgs: 3 active+clean+scrubbing, 17 activating, 49 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:49 np0005549474 podman[86858]: 2025-12-07 09:41:49.266565703 +0000 UTC m=+1.677417340 container create bf6b0b791167aa6f84ce9a44a2eae36c45238d02ac2445e269581d96500a4a99 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 04:41:49 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec  7 04:41:49 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec  7 04:41:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec  7 04:41:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  7 04:41:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec  7 04:41:50 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec  7 04:41:50 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec  7 04:41:50 np0005549474 systemd[1]: Started libpod-conmon-bf6b0b791167aa6f84ce9a44a2eae36c45238d02ac2445e269581d96500a4a99.scope.
Dec  7 04:41:50 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90886602a2fdd1d8f7b42d4b1ff01c60dcbe544fad941972e4a8a56cd4e9584e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90886602a2fdd1d8f7b42d4b1ff01c60dcbe544fad941972e4a8a56cd4e9584e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v93: 69 pgs: 3 active+clean+scrubbing, 17 activating, 49 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:51 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec  7 04:41:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:41:51 np0005549474 podman[86858]: 2025-12-07 09:41:51.41255107 +0000 UTC m=+3.823402807 container init bf6b0b791167aa6f84ce9a44a2eae36c45238d02ac2445e269581d96500a4a99 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:41:51 np0005549474 podman[86858]: 2025-12-07 09:41:51.428997638 +0000 UTC m=+3.839849325 container start bf6b0b791167aa6f84ce9a44a2eae36c45238d02ac2445e269581d96500a4a99 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 04:41:51 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec  7 04:41:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec  7 04:41:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1040799493' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  7 04:41:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  7 04:41:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Dec  7 04:41:51 np0005549474 podman[86858]: 2025-12-07 09:41:51.956870237 +0000 UTC m=+4.367721874 container attach bf6b0b791167aa6f84ce9a44a2eae36c45238d02ac2445e269581d96500a4a99 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 04:41:51 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1301740075' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  7 04:41:51 np0005549474 ceph-mon[74516]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:51 np0005549474 ceph-mon[74516]: from='osd.2 [v2:192.168.122.102:6800/97309485,v1:192.168.122.102:6801/97309485]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  7 04:41:51 np0005549474 ceph-mon[74516]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  7 04:41:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:52 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Dec  7 04:41:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:41:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:41:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Dec  7 04:41:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  7 04:41:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e29 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
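The two audit entries above are osd.2 registering itself in the CRUSH map at startup: first tagging its device class, then create-or-move with weight 0.0195. CRUSH weights are conventionally in TiB, so 0.0195 is roughly 20 GiB, which matches the pgmap totals jumping from 40 GiB to 60 GiB once this OSD is counted. Re-issued by hand, the equivalent admin commands would look roughly like this (ids, weight, and location taken from the dispatched JSON; the CLI spelling is the standard one, not copied from this log):

  ceph osd crush set-device-class hdd 2
  ceph osd crush create-or-move osd.2 0.0195 host=compute-2 root=default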
Dec  7 04:41:52 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 04:41:52 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:41:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec  7 04:41:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v95: 69 pgs: 3 active+clean+scrubbing, 66 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v96: 69 pgs: 3 active+clean+scrubbing, 66 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1040799493' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Dec  7 04:41:56 np0005549474 gracious_pascal[86874]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1040799493' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: from='osd.2 [v2:192.168.122.102:6800/97309485,v1:192.168.122.102:6801/97309485]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.1b( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842837334s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 89.372222900s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[3.1b( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=30 pruub=11.720272064s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 85.249671936s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.1b( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842837334s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372222900s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[3.1b( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=30 pruub=11.720272064s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.249671936s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[3.8( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=30 pruub=11.720096588s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 85.249671936s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[3.8( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=30 pruub=11.720096588s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.249671936s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[3.0( empty local-lis/les=23/25 n=0 ec=17/17 lis/c=23/23 les/c/f=25/25/0 sis=30 pruub=11.719950676s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 85.249687195s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[3.0( empty local-lis/les=23/25 n=0 ec=17/17 lis/c=23/23 les/c/f=25/25/0 sis=30 pruub=11.719950676s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.249687195s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=30 pruub=15.842556000s) [] r=-1 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active pruub 89.372337341s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=30 pruub=15.842556000s) [] r=-1 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372337341s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.a( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842473984s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 89.372306824s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.a( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842473984s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372306824s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.d( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842345238s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 89.372261047s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.c( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842350960s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 89.372283936s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.d( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842345238s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372261047s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.c( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842350960s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372283936s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.10( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842249870s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 89.372314453s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.10( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842249870s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372314453s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842210770s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 89.372314453s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842210770s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372314453s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.15( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842172623s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 89.372322083s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:41:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 30 pg[2.15( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=15.842172623s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372322083s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:41:56 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 04:41:56 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/97309485; not ready for session (expect reconnect)
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:41:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:41:56 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
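The repeated "mgr finish mon failed to return metadata for osd.2: (2) No such file or directory", together with the handle_open "not ready for session" rejections, reads as ordinary bring-up noise: the mgr keeps polling osd metadata for the new daemon before osd.2 has finished booting, and both messages stop once the OSD enters the map at e31 (04:42:02 below). After boot the same query returns normally; a one-line sketch matching the dispatched command:

  ceph osd metadata 2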
Dec  7 04:41:56 np0005549474 systemd[1]: libpod-bf6b0b791167aa6f84ce9a44a2eae36c45238d02ac2445e269581d96500a4a99.scope: Deactivated successfully.
Dec  7 04:41:56 np0005549474 podman[86858]: 2025-12-07 09:41:56.700977235 +0000 UTC m=+9.111828882 container died bf6b0b791167aa6f84ce9a44a2eae36c45238d02ac2445e269581d96500a4a99 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:41:56 np0005549474 systemd[1]: var-lib-containers-storage-overlay-90886602a2fdd1d8f7b42d4b1ff01c60dcbe544fad941972e4a8a56cd4e9584e-merged.mount: Deactivated successfully.
Dec  7 04:41:56 np0005549474 podman[86858]: 2025-12-07 09:41:56.737415398 +0000 UTC m=+9.148267025 container remove bf6b0b791167aa6f84ce9a44a2eae36c45238d02ac2445e269581d96500a4a99 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
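Container bf6b0b791167... above completes podman's full transient lifecycle as discrete journal events: create, init, start, attach, died, remove, bracketed by the libpod and libpod-conmon scope activations. To replay that sequence from podman's own event log, something like the following should work (container ID from this log; the time window is an assumption):

  podman events \
      --filter container=bf6b0b791167aa6f84ce9a44a2eae36c45238d02ac2445e269581d96500a4a99 \
      --since "2025-12-07 09:41:47" --until "2025-12-07 09:42:00"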
Dec  7 04:41:56 np0005549474 systemd[1]: libpod-conmon-bf6b0b791167aa6f84ce9a44a2eae36c45238d02ac2445e269581d96500a4a99.scope: Deactivated successfully.
Dec  7 04:41:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v98: 69 pgs: 3 active+clean+scrubbing, 66 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:57 np0005549474 python3[86936]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
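Unwrapped from the Ansible _raw_params above, the task is a throwaway admin container running the ceph CLI; every flag, volume, and the fsid below are verbatim from that log line:

  podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      osd pool application enable cephfs.cephfs.data cephfs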
Dec  7 04:41:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:41:57 np0005549474 podman[86937]: 2025-12-07 09:41:57.060954533 +0000 UTC m=+0.022825462 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:41:57 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec  7 04:41:57 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec  7 04:41:57 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/97309485; not ready for session (expect reconnect)
Dec  7 04:41:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:41:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:41:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:41:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:41:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:41:57 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:41:58 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec  7 04:41:58 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/97309485; not ready for session (expect reconnect)
Dec  7 04:41:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v99: 69 pgs: 3 active+clean+scrubbing, 66 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:41:59 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 04:41:59 np0005549474 podman[86937]: 2025-12-07 09:41:59.307134191 +0000 UTC m=+2.269005120 container create 021b06efc728c4644c7dedb21b89c663a3eeae50d1d05abc7f75f1409a72a6a6 (image=quay.io/ceph/ceph:v19, name=reverent_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:41:59 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1040799493' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec  7 04:41:59 np0005549474 systemd[1]: Started libpod-conmon-021b06efc728c4644c7dedb21b89c663a3eeae50d1d05abc7f75f1409a72a6a6.scope.
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:41:59 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:41:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bab9b68e1e6b45b28edcf17540e635a28413712f3b0eb5f8e730976cac1e1d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bab9b68e1e6b45b28edcf17540e635a28413712f3b0eb5f8e730976cac1e1d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:41:59 np0005549474 podman[86937]: 2025-12-07 09:41:59.432454906 +0000 UTC m=+2.394325815 container init 021b06efc728c4644c7dedb21b89c663a3eeae50d1d05abc7f75f1409a72a6a6 (image=quay.io/ceph/ceph:v19, name=reverent_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:41:59 np0005549474 podman[86937]: 2025-12-07 09:41:59.437655252 +0000 UTC m=+2.399526171 container start 021b06efc728c4644c7dedb21b89c663a3eeae50d1d05abc7f75f1409a72a6a6 (image=quay.io/ceph/ceph:v19, name=reverent_cori, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:41:59 np0005549474 podman[86937]: 2025-12-07 09:41:59.561614466 +0000 UTC m=+2.523485365 container attach 021b06efc728c4644c7dedb21b89c663a3eeae50d1d05abc7f75f1409a72a6a6 (image=quay.io/ceph/ceph:v19, name=reverent_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:41:59 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/97309485; not ready for session (expect reconnect)
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:41:59 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 04:41:59 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Dec  7 04:41:59 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec  7 04:41:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4151627428' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  7 04:42:00 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec  7 04:42:00 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/97309485; not ready for session (expect reconnect)
Dec  7 04:42:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v100: 69 pgs: 3 active+clean+scrubbing, 66 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 04:42:01 np0005549474 ceph-mgr[74811]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/97309485; not ready for session (expect reconnect)
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:42:02 np0005549474 ceph-mgr[74811]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/4151627428' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4151627428' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec  7 04:42:02 np0005549474 reverent_cori[86953]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/97309485,v1:192.168.122.102:6801/97309485] boot
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:42:02 np0005549474 systemd[1]: libpod-021b06efc728c4644c7dedb21b89c663a3eeae50d1d05abc7f75f1409a72a6a6.scope: Deactivated successfully.
Dec  7 04:42:02 np0005549474 podman[86937]: 2025-12-07 09:42:02.254534598 +0000 UTC m=+5.216405517 container died 021b06efc728c4644c7dedb21b89c663a3eeae50d1d05abc7f75f1409a72a6a6 (image=quay.io/ceph/ceph:v19, name=reverent_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:02 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a1bab9b68e1e6b45b28edcf17540e635a28413712f3b0eb5f8e730976cac1e1d-merged.mount: Deactivated successfully.
Dec  7 04:42:02 np0005549474 podman[86937]: 2025-12-07 09:42:02.332423936 +0000 UTC m=+5.294294835 container remove 021b06efc728c4644c7dedb21b89c663a3eeae50d1d05abc7f75f1409a72a6a6 (image=quay.io/ceph/ceph:v19, name=reverent_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:02 np0005549474 systemd[1]: libpod-conmon-021b06efc728c4644c7dedb21b89c663a3eeae50d1d05abc7f75f1409a72a6a6.scope: Deactivated successfully.
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.1b( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842820168s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372222900s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.1b( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842795372s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372222900s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[3.1b( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=31 pruub=5.720024586s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.249671936s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=9.842662811s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372337341s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[3.8( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=31 pruub=5.720228672s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.249671936s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=9.842646599s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372337341s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[3.8( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=31 pruub=5.719981194s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.249671936s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[3.0( empty local-lis/les=23/25 n=0 ec=17/17 lis/c=23/23 les/c/f=25/25/0 sis=31 pruub=5.719978809s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.249687195s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[3.0( empty local-lis/les=23/25 n=0 ec=17/17 lis/c=23/23 les/c/f=25/25/0 sis=31 pruub=5.719964504s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.249687195s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[3.1b( empty local-lis/les=23/25 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=31 pruub=5.719889641s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.249671936s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.a( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842491150s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372306824s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.a( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842475891s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372306824s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.d( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842422485s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372261047s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.d( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842409134s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372261047s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.c( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842387199s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372283936s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.c( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842371941s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372283936s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.13( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842367172s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372314453s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.10( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842362404s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372314453s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.13( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842356682s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372314453s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.10( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842348099s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372314453s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.15( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842263222s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372322083s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:42:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 31 pg[2.15( empty local-lis/les=27/28 n=0 ec=19/15 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=9.842248917s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.372322083s@ mbc={}] state<Start>: transitioning to Stray
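The burst of PeeringState lines above is osd.0 reacting to osdmap e31 (osd.2's boot): each of these empty PGs had mapped to nothing in e30 and now maps to [2], so osd.0 drops to role -1 for them and parks its copy as Stray until the new primary confirms or purges it. Peering detail for any one PG can be pulled on demand; a sketch using a pgid from these lines:

  ceph pg 2.1b query          # full peering/recovery state for one PG
  ceph pg dump pgs_brief      # up/acting sets for every PG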
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:42:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v102: 69 pgs: 24 peering, 2 active+clean+scrubbing, 43 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: OSD bench result of 6088.782708 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
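The bench message above is the mclock scheduler sanity-checking osd.2's self-benchmark: the measured 6088.78 IOPS falls outside the 50-500 plausibility window for an hdd-class device, so the default capacity of 315 IOPS is retained. Following the log's own advice, the capacity would be measured externally and then pinned; in the sketch below the fio target and profile are assumptions, and only the config key comes from the message:

  # measure steady-state 4k random-write IOPS on the OSD's backing device (path assumed)
  fio --name=osdbench --filename=/dev/vdb --direct=1 --rw=randwrite \
      --bs=4k --iodepth=16 --runtime=60 --time_based --group_reporting
  # pin the measured capacity for this OSD (5500 is a placeholder value)
  ceph config set osd.2 osd_mclock_max_capacity_iops_hdd 5500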
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/4151627428' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: osd.2 [v2:192.168.122.102:6800/97309485,v1:192.168.122.102:6801/97309485] boot
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  7 04:42:03 np0005549474 python3[87171]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec  7 04:42:03 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec  7 04:42:03 np0005549474 python3[87242]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765100522.9977846-37194-115671197504680/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:42:04 np0005549474 python3[87344]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:42:04 np0005549474 ceph-mon[74516]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  7 04:42:04 np0005549474 ceph-mon[74516]: Cluster is now healthy
Dec  7 04:42:04 np0005549474 python3[87419]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765100523.875261-37208-5730627384993/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=b70e0b2bd8df634bfc06f75be47b574ced566c62 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:42:04 np0005549474 python3[87469]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
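This task pushes the legacy flat config into the cluster's central store: ceph config assimilate-conf ingests every option it can into the monitors' configuration database and prints back the residue that must remain in a local file. That residual output is captured in the funny_burnell lines below, where only fsid and mon_host survive, exactly the pair a minimal client conf needs. Stripped of the container wrapper, the call is:

  ceph config assimilate-conf -i /home/assimilate_ceph.conf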
Dec  7 04:42:04 np0005549474 podman[87470]: 2025-12-07 09:42:04.847929268 +0000 UTC m=+0.045565341 container create aa0763b85d01066ded8f59a900367048c5ea9ac6107396e13f4cb14926fd6c07 (image=quay.io/ceph/ceph:v19, name=funny_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:04 np0005549474 systemd[1]: Started libpod-conmon-aa0763b85d01066ded8f59a900367048c5ea9ac6107396e13f4cb14926fd6c07.scope.
Dec  7 04:42:04 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:04 np0005549474 podman[87470]: 2025-12-07 09:42:04.824029404 +0000 UTC m=+0.021665497 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f1929216aa8cba65dd5df4d57590093080f355025e9b571d55990c97fc4359/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f1929216aa8cba65dd5df4d57590093080f355025e9b571d55990c97fc4359/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f1929216aa8cba65dd5df4d57590093080f355025e9b571d55990c97fc4359/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:04 np0005549474 podman[87470]: 2025-12-07 09:42:04.942922184 +0000 UTC m=+0.140558277 container init aa0763b85d01066ded8f59a900367048c5ea9ac6107396e13f4cb14926fd6c07 (image=quay.io/ceph/ceph:v19, name=funny_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:04 np0005549474 podman[87470]: 2025-12-07 09:42:04.948913625 +0000 UTC m=+0.146549688 container start aa0763b85d01066ded8f59a900367048c5ea9ac6107396e13f4cb14926fd6c07 (image=quay.io/ceph/ceph:v19, name=funny_burnell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:42:04 np0005549474 podman[87470]: 2025-12-07 09:42:04.96096686 +0000 UTC m=+0.158602943 container attach aa0763b85d01066ded8f59a900367048c5ea9ac6107396e13f4cb14926fd6c07 (image=quay.io/ceph/ceph:v19, name=funny_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 04:42:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v104: 69 pgs: 24 peering, 2 active+clean+scrubbing, 43 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
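The warning above is plain arithmetic: the cephadm autotuner computed 128.0M (134217728 bytes) for this memory-starved host, but osd_memory_target enforces a floor of 939524096 bytes (896 MiB), so the set is rejected and only the preceding config rm for osd.2 takes effect. Any manual override has to clear that floor; a sketch using the host mask cephadm itself targets (the mask spelling is the standard one, the value is the logged minimum):

  ceph config set osd/host:compute-2 osd_memory_target 939524096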
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
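The generate-minimal-conf / auth get pair, followed by the three "Updating ...:/etc/ceph/ceph.conf" lines, is cephadm's config-distribution step: render the smallest ceph.conf that can reach the monitors, then write it out to every managed host. The same rendering can be requested interactively; both commands appear verbatim in the audit entries above:

  ceph config generate-minimal-conf
  ceph auth get client.admin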
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2148721283' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 04:42:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2148721283' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  7 04:42:05 np0005549474 funny_burnell[87485]: 
Dec  7 04:42:05 np0005549474 funny_burnell[87485]: [global]
Dec  7 04:42:05 np0005549474 funny_burnell[87485]: 	fsid = 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec  7 04:42:05 np0005549474 funny_burnell[87485]: 	mon_host = 192.168.122.100
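
[editor's note] The three funny_burnell lines are the stdout of the "config assimilate-conf" run audited just above: the options in the mounted assimilate_ceph.conf are absorbed into the mon config database, and the command prints back the minimal conf that still has to live on disk (just fsid and mon_host). The exact container invocation is not in this excerpt; a sketch of what it most plausibly looked like, reusing only the image, fsid and volume paths that appear elsewhere in this section:

    # Assumed reconstruction of the funny_burnell one-shot container.
    # Requires podman and the admin keyring on the host; paths/image from the log.
    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
        "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring",
        "config", "assimilate-conf", "-i", "/home/assimilate_ceph.conf",
    ]
    # stdout is the minimal conf shown above; everything else now lives in the mons.
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)
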
Dec  7 04:42:05 np0005549474 systemd[1]: libpod-aa0763b85d01066ded8f59a900367048c5ea9ac6107396e13f4cb14926fd6c07.scope: Deactivated successfully.
Dec  7 04:42:05 np0005549474 podman[87603]: 2025-12-07 09:42:05.416687956 +0000 UTC m=+0.028549286 container died aa0763b85d01066ded8f59a900367048c5ea9ac6107396e13f4cb14926fd6c07 (image=quay.io/ceph/ceph:v19, name=funny_burnell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:05 np0005549474 systemd[1]: var-lib-containers-storage-overlay-25f1929216aa8cba65dd5df4d57590093080f355025e9b571d55990c97fc4359-merged.mount: Deactivated successfully.
Dec  7 04:42:05 np0005549474 podman[87603]: 2025-12-07 09:42:05.45152719 +0000 UTC m=+0.063388510 container remove aa0763b85d01066ded8f59a900367048c5ea9ac6107396e13f4cb14926fd6c07 (image=quay.io/ceph/ceph:v19, name=funny_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec  7 04:42:05 np0005549474 systemd[1]: libpod-conmon-aa0763b85d01066ded8f59a900367048c5ea9ac6107396e13f4cb14926fd6c07.scope: Deactivated successfully.
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:05 np0005549474 python3[87744]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:05 np0005549474 podman[87796]: 2025-12-07 09:42:05.825054388 +0000 UTC m=+0.038621160 container create 2bb4415afcb6cf3ea22a5b61ef17c77133fd27de2e654b5b6ff00ec7f2e33ba3 (image=quay.io/ceph/ceph:v19, name=charming_bose, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:05 np0005549474 systemd[1]: Started libpod-conmon-2bb4415afcb6cf3ea22a5b61ef17c77133fd27de2e654b5b6ff00ec7f2e33ba3.scope.
Dec  7 04:42:05 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/416b7de452dc0c078a611dc2f345c6a1072bc75ffa45c97395efce04e5e4461a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/416b7de452dc0c078a611dc2f345c6a1072bc75ffa45c97395efce04e5e4461a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/416b7de452dc0c078a611dc2f345c6a1072bc75ffa45c97395efce04e5e4461a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:05 np0005549474 podman[87796]: 2025-12-07 09:42:05.897407058 +0000 UTC m=+0.110973860 container init 2bb4415afcb6cf3ea22a5b61ef17c77133fd27de2e654b5b6ff00ec7f2e33ba3 (image=quay.io/ceph/ceph:v19, name=charming_bose, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:42:05 np0005549474 podman[87796]: 2025-12-07 09:42:05.806063393 +0000 UTC m=+0.019630185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:05 np0005549474 podman[87796]: 2025-12-07 09:42:05.904815993 +0000 UTC m=+0.118382765 container start 2bb4415afcb6cf3ea22a5b61ef17c77133fd27de2e654b5b6ff00ec7f2e33ba3 (image=quay.io/ceph/ceph:v19, name=charming_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:05 np0005549474 podman[87796]: 2025-12-07 09:42:05.90803712 +0000 UTC m=+0.121603922 container attach 2bb4415afcb6cf3ea22a5b61ef17c77133fd27de2e654b5b6ff00ec7f2e33ba3 (image=quay.io/ceph/ceph:v19, name=charming_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:05 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2148721283' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2148721283' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/698108247' entity='client.admin' 
Dec  7 04:42:06 np0005549474 charming_bose[87837]: set ssl_option
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:06 np0005549474 systemd[1]: libpod-2bb4415afcb6cf3ea22a5b61ef17c77133fd27de2e654b5b6ff00ec7f2e33ba3.scope: Deactivated successfully.
Dec  7 04:42:06 np0005549474 podman[87796]: 2025-12-07 09:42:06.398327433 +0000 UTC m=+0.611894245 container died 2bb4415afcb6cf3ea22a5b61ef17c77133fd27de2e654b5b6ff00ec7f2e33ba3 (image=quay.io/ceph/ceph:v19, name=charming_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay-416b7de452dc0c078a611dc2f345c6a1072bc75ffa45c97395efce04e5e4461a-merged.mount: Deactivated successfully.
Dec  7 04:42:06 np0005549474 podman[87796]: 2025-12-07 09:42:06.443914772 +0000 UTC m=+0.657481574 container remove 2bb4415afcb6cf3ea22a5b61ef17c77133fd27de2e654b5b6ff00ec7f2e33ba3 (image=quay.io/ceph/ceph:v19, name=charming_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 04:42:06 np0005549474 systemd[1]: libpod-conmon-2bb4415afcb6cf3ea22a5b61ef17c77133fd27de2e654b5b6ff00ec7f2e33ba3.scope: Deactivated successfully.
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:06 np0005549474 python3[88073]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:06 np0005549474 podman[88094]: 2025-12-07 09:42:06.780166582 +0000 UTC m=+0.034847196 container create 768e6e6cda6724be76358105276d52b98d66d1044d4377d51e8698079d9ddae7 (image=quay.io/ceph/ceph:v19, name=festive_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:42:06 np0005549474 systemd[1]: Started libpod-conmon-768e6e6cda6724be76358105276d52b98d66d1044d4377d51e8698079d9ddae7.scope.
Dec  7 04:42:06 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:06 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203eab4e4c61346f4d1b6e670a4fd4c40468bc2b92cc91c12fc2d4c34035eb7d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:06 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203eab4e4c61346f4d1b6e670a4fd4c40468bc2b92cc91c12fc2d4c34035eb7d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:06 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203eab4e4c61346f4d1b6e670a4fd4c40468bc2b92cc91c12fc2d4c34035eb7d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:06 np0005549474 podman[88094]: 2025-12-07 09:42:06.860095922 +0000 UTC m=+0.114776536 container init 768e6e6cda6724be76358105276d52b98d66d1044d4377d51e8698079d9ddae7 (image=quay.io/ceph/ceph:v19, name=festive_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:42:06 np0005549474 podman[88094]: 2025-12-07 09:42:06.764544199 +0000 UTC m=+0.019224843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:06 np0005549474 podman[88094]: 2025-12-07 09:42:06.866811385 +0000 UTC m=+0.121491999 container start 768e6e6cda6724be76358105276d52b98d66d1044d4377d51e8698079d9ddae7 (image=quay.io/ceph/ceph:v19, name=festive_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:06 np0005549474 podman[88094]: 2025-12-07 09:42:06.871126986 +0000 UTC m=+0.125807600 container attach 768e6e6cda6724be76358105276d52b98d66d1044d4377d51e8698079d9ddae7 (image=quay.io/ceph/ceph:v19, name=festive_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v105: 69 pgs: 24 peering, 2 active+clean+scrubbing, 43 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:42:07 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:42:07 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:07 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 04:42:07 np0005549474 podman[88201]: 2025-12-07 09:42:07.229542936 +0000 UTC m=+0.031812555 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:42:07 np0005549474 podman[88201]: 2025-12-07 09:42:07.962808635 +0000 UTC m=+0.765078274 container create d9463ad52ce1198beb4bdc9138baf480b0235d82ed3aeb0211464de6ceb67688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 04:42:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:07 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Dec  7 04:42:07 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Dec  7 04:42:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 04:42:08 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:08 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:08 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/698108247' entity='client.admin' 
Dec  7 04:42:08 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:08 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:08 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:08 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:08 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:08 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:42:08 np0005549474 systemd[1]: Started libpod-conmon-d9463ad52ce1198beb4bdc9138baf480b0235d82ed3aeb0211464de6ceb67688.scope.
Dec  7 04:42:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:08 np0005549474 festive_dirac[88137]: Scheduled rgw.rgw update...
Dec  7 04:42:08 np0005549474 festive_dirac[88137]: Scheduled ingress.rgw.default update...
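
[editor's note] The two "Scheduled ... update" lines are "ceph orch apply --in-file" acknowledging the spec mounted from /tmp/ceph_rgw.yml by the ansible podman invocation above. Only the placements can be read off the mgr log ("rgw.rgw ... compute-0;compute-1;compute-2", "ingress.rgw.default ... count:2"); a rough, hypothetical reconstruction of that spec file, with every field not named in the log marked as an assumption:

    # Rough reconstruction of /tmp/ceph_rgw.yml from the "Saving service ..." lines.
    import yaml

    rgw_spec = [
        {
            "service_type": "rgw",
            "service_id": "rgw",
            "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]},
        },
        {
            "service_type": "ingress",
            "service_id": "rgw.default",
            "placement": {"count": 2},
            "spec": {"backend_service": "rgw.rgw"},  # assumed wiring, not in the log
        },
    ]
    print(yaml.safe_dump_all(rgw_spec, sort_keys=False))
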
Dec  7 04:42:08 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:08 np0005549474 systemd[1]: libpod-768e6e6cda6724be76358105276d52b98d66d1044d4377d51e8698079d9ddae7.scope: Deactivated successfully.
Dec  7 04:42:08 np0005549474 podman[88094]: 2025-12-07 09:42:08.049378914 +0000 UTC m=+1.304059528 container died 768e6e6cda6724be76358105276d52b98d66d1044d4377d51e8698079d9ddae7 (image=quay.io/ceph/ceph:v19, name=festive_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:08 np0005549474 podman[88201]: 2025-12-07 09:42:08.084026614 +0000 UTC m=+0.886296253 container init d9463ad52ce1198beb4bdc9138baf480b0235d82ed3aeb0211464de6ceb67688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 04:42:08 np0005549474 systemd[1]: var-lib-containers-storage-overlay-203eab4e4c61346f4d1b6e670a4fd4c40468bc2b92cc91c12fc2d4c34035eb7d-merged.mount: Deactivated successfully.
Dec  7 04:42:08 np0005549474 podman[88201]: 2025-12-07 09:42:08.092030866 +0000 UTC m=+0.894300485 container start d9463ad52ce1198beb4bdc9138baf480b0235d82ed3aeb0211464de6ceb67688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 04:42:08 np0005549474 loving_black[88219]: 167 167
Dec  7 04:42:08 np0005549474 systemd[1]: libpod-d9463ad52ce1198beb4bdc9138baf480b0235d82ed3aeb0211464de6ceb67688.scope: Deactivated successfully.
Dec  7 04:42:08 np0005549474 podman[88094]: 2025-12-07 09:42:08.122844589 +0000 UTC m=+1.377525203 container remove 768e6e6cda6724be76358105276d52b98d66d1044d4377d51e8698079d9ddae7 (image=quay.io/ceph/ceph:v19, name=festive_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:08 np0005549474 python3[88322]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:42:08 np0005549474 python3[88393]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765100528.2619288-37227-169615500875109/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:42:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v106: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:09 np0005549474 podman[88201]: 2025-12-07 09:42:09.360761715 +0000 UTC m=+2.163031424 container attach d9463ad52ce1198beb4bdc9138baf480b0235d82ed3aeb0211464de6ceb67688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_black, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:42:09 np0005549474 podman[88201]: 2025-12-07 09:42:09.361533698 +0000 UTC m=+2.163803347 container died d9463ad52ce1198beb4bdc9138baf480b0235d82ed3aeb0211464de6ceb67688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:42:09 np0005549474 python3[88443]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:09 np0005549474 ceph-mon[74516]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:09 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:09 np0005549474 ceph-mon[74516]: Saving service ingress.rgw.default spec with placement count:2
Dec  7 04:42:09 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:09 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4cd2e6e0622f5e1e29be46fa6e6e9564b1055c289837fd3360df6f7ea9bff712-merged.mount: Deactivated successfully.
Dec  7 04:42:09 np0005549474 podman[88201]: 2025-12-07 09:42:09.773107297 +0000 UTC m=+2.575376926 container remove d9463ad52ce1198beb4bdc9138baf480b0235d82ed3aeb0211464de6ceb67688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:09 np0005549474 podman[88444]: 2025-12-07 09:42:09.81451118 +0000 UTC m=+0.182235678 container create 642a45b5bccaf215b5208c098aba343685847eed1465e9b6fdc45acacf22aa90 (image=quay.io/ceph/ceph:v19, name=xenodochial_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:09 np0005549474 systemd[1]: Started libpod-conmon-642a45b5bccaf215b5208c098aba343685847eed1465e9b6fdc45acacf22aa90.scope.
Dec  7 04:42:09 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:09 np0005549474 systemd[1]: libpod-conmon-d9463ad52ce1198beb4bdc9138baf480b0235d82ed3aeb0211464de6ceb67688.scope: Deactivated successfully.
Dec  7 04:42:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed344c38c1bb7014e325116e6977001f296408af2ecafb5ecf6d0f5ca1d48526/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed344c38c1bb7014e325116e6977001f296408af2ecafb5ecf6d0f5ca1d48526/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed344c38c1bb7014e325116e6977001f296408af2ecafb5ecf6d0f5ca1d48526/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:09 np0005549474 podman[88444]: 2025-12-07 09:42:09.884959393 +0000 UTC m=+0.252683911 container init 642a45b5bccaf215b5208c098aba343685847eed1465e9b6fdc45acacf22aa90 (image=quay.io/ceph/ceph:v19, name=xenodochial_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 04:42:09 np0005549474 podman[88444]: 2025-12-07 09:42:09.79472395 +0000 UTC m=+0.162448468 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:09 np0005549474 podman[88444]: 2025-12-07 09:42:09.896122191 +0000 UTC m=+0.263846689 container start 642a45b5bccaf215b5208c098aba343685847eed1465e9b6fdc45acacf22aa90 (image=quay.io/ceph/ceph:v19, name=xenodochial_bose, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 04:42:09 np0005549474 podman[88444]: 2025-12-07 09:42:09.904152225 +0000 UTC m=+0.271876753 container attach 642a45b5bccaf215b5208c098aba343685847eed1465e9b6fdc45acacf22aa90 (image=quay.io/ceph/ceph:v19, name=xenodochial_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:42:09 np0005549474 podman[88469]: 2025-12-07 09:42:09.930332057 +0000 UTC m=+0.049201331 container create fce3d5eecd1490bf75508c0a9c39a3e592075b9c9f3c0230169ba48bcd54c0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 04:42:09 np0005549474 systemd[1]: Started libpod-conmon-fce3d5eecd1490bf75508c0a9c39a3e592075b9c9f3c0230169ba48bcd54c0f8.scope.
Dec  7 04:42:09 np0005549474 systemd[1]: libpod-conmon-768e6e6cda6724be76358105276d52b98d66d1044d4377d51e8698079d9ddae7.scope: Deactivated successfully.
Dec  7 04:42:09 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc946af95ffe85f440fd24a70c0150b341ddbce83f691809f158a1d3c50e8b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc946af95ffe85f440fd24a70c0150b341ddbce83f691809f158a1d3c50e8b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc946af95ffe85f440fd24a70c0150b341ddbce83f691809f158a1d3c50e8b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc946af95ffe85f440fd24a70c0150b341ddbce83f691809f158a1d3c50e8b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc946af95ffe85f440fd24a70c0150b341ddbce83f691809f158a1d3c50e8b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:10 np0005549474 podman[88469]: 2025-12-07 09:42:09.91161637 +0000 UTC m=+0.030485674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:42:10 np0005549474 podman[88469]: 2025-12-07 09:42:10.018724492 +0000 UTC m=+0.137593766 container init fce3d5eecd1490bf75508c0a9c39a3e592075b9c9f3c0230169ba48bcd54c0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 04:42:10 np0005549474 podman[88469]: 2025-12-07 09:42:10.02627065 +0000 UTC m=+0.145139924 container start fce3d5eecd1490bf75508c0a9c39a3e592075b9c9f3c0230169ba48bcd54c0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 04:42:10 np0005549474 podman[88469]: 2025-12-07 09:42:10.029997373 +0000 UTC m=+0.148866657 container attach fce3d5eecd1490bf75508c0a9c39a3e592075b9c9f3c0230169ba48bcd54c0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 04:42:10 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:42:10 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service node-exporter spec with placement *
Dec  7 04:42:10 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Dec  7 04:42:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  7 04:42:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:10 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Dec  7 04:42:10 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Dec  7 04:42:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  7 04:42:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:10 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Dec  7 04:42:10 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Dec  7 04:42:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  7 04:42:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:10 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Dec  7 04:42:10 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Dec  7 04:42:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  7 04:42:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:10 np0005549474 xenodochial_bose[88461]: Scheduled node-exporter update...
Dec  7 04:42:10 np0005549474 xenodochial_bose[88461]: Scheduled grafana update...
Dec  7 04:42:10 np0005549474 xenodochial_bose[88461]: Scheduled prometheus update...
Dec  7 04:42:10 np0005549474 xenodochial_bose[88461]: Scheduled alertmanager update...
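
[editor's note] Likewise, these four "Scheduled ... update" lines correspond to /tmp/ceph_dashboard.yml, templated above from ceph_monitoring_stack.yml.j2 and applied via "orch apply --in-file". Only the placements are taken from the "Saving service ... spec with placement ..." lines (node-exporter on every host, the rest pinned to compute-0 with count:1); the real template's other contents are unknown here:

    # Assumed shape of /tmp/ceph_dashboard.yml; placements from the mgr log only.
    import yaml

    monitoring_spec = [
        {"service_type": "node-exporter", "placement": {"host_pattern": "*"}},
        {"service_type": "grafana",       "placement": {"hosts": ["compute-0"], "count": 1}},
        {"service_type": "prometheus",    "placement": {"hosts": ["compute-0"], "count": 1}},
        {"service_type": "alertmanager",  "placement": {"hosts": ["compute-0"], "count": 1}},
    ]
    print(yaml.safe_dump_all(monitoring_spec, sort_keys=False))
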
Dec  7 04:42:10 np0005549474 quirky_booth[88486]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:42:10 np0005549474 quirky_booth[88486]: --> All data devices are unavailable
Dec  7 04:42:10 np0005549474 systemd[1]: libpod-642a45b5bccaf215b5208c098aba343685847eed1465e9b6fdc45acacf22aa90.scope: Deactivated successfully.
Dec  7 04:42:10 np0005549474 podman[88444]: 2025-12-07 09:42:10.369707838 +0000 UTC m=+0.737432336 container died 642a45b5bccaf215b5208c098aba343685847eed1465e9b6fdc45acacf22aa90 (image=quay.io/ceph/ceph:v19, name=xenodochial_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Dec  7 04:42:10 np0005549474 systemd[1]: libpod-fce3d5eecd1490bf75508c0a9c39a3e592075b9c9f3c0230169ba48bcd54c0f8.scope: Deactivated successfully.
Dec  7 04:42:10 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ed344c38c1bb7014e325116e6977001f296408af2ecafb5ecf6d0f5ca1d48526-merged.mount: Deactivated successfully.
Dec  7 04:42:10 np0005549474 podman[88469]: 2025-12-07 09:42:10.412113642 +0000 UTC m=+0.530982926 container died fce3d5eecd1490bf75508c0a9c39a3e592075b9c9f3c0230169ba48bcd54c0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:10 np0005549474 podman[88444]: 2025-12-07 09:42:10.435817849 +0000 UTC m=+0.803542357 container remove 642a45b5bccaf215b5208c098aba343685847eed1465e9b6fdc45acacf22aa90 (image=quay.io/ceph/ceph:v19, name=xenodochial_bose, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 04:42:10 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ecc946af95ffe85f440fd24a70c0150b341ddbce83f691809f158a1d3c50e8b9-merged.mount: Deactivated successfully.
Dec  7 04:42:10 np0005549474 systemd[1]: libpod-conmon-642a45b5bccaf215b5208c098aba343685847eed1465e9b6fdc45acacf22aa90.scope: Deactivated successfully.
Dec  7 04:42:10 np0005549474 podman[88469]: 2025-12-07 09:42:10.486434121 +0000 UTC m=+0.605303395 container remove fce3d5eecd1490bf75508c0a9c39a3e592075b9c9f3c0230169ba48bcd54c0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 04:42:10 np0005549474 systemd[1]: libpod-conmon-fce3d5eecd1490bf75508c0a9c39a3e592075b9c9f3c0230169ba48bcd54c0f8.scope: Deactivated successfully.
Dec  7 04:42:10 np0005549474 python3[88620]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v107: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:11 np0005549474 podman[88640]: 2025-12-07 09:42:10.932579018 +0000 UTC m=+0.026001539 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:11 np0005549474 podman[88640]: 2025-12-07 09:42:11.831848482 +0000 UTC m=+0.925270963 container create 87ab610b3fed38e0622db503b5785d6eec475f82a4ac3830617a83278369bcb2 (image=quay.io/ceph/ceph:v19, name=crazy_curie, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:11 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:11 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:11 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:11 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:11 np0005549474 systemd[1]: Started libpod-conmon-87ab610b3fed38e0622db503b5785d6eec475f82a4ac3830617a83278369bcb2.scope.
Dec  7 04:42:11 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9f1f8928cc9607395c4a8585da62037299eaf4b19161d3fc6cd160ac30dffd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9f1f8928cc9607395c4a8585da62037299eaf4b19161d3fc6cd160ac30dffd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9f1f8928cc9607395c4a8585da62037299eaf4b19161d3fc6cd160ac30dffd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
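The three kernel lines above flag each bind-mounted path on xfs as supporting timestamps only up to 0x7fffffff seconds after the epoch, the 32-bit time_t ceiling for xfs filesystems without the bigtime feature. Decoding that limit, as a quick sketch:

    # Sketch: decode the 0x7fffffff limit the kernel prints for non-bigtime xfs.
    from datetime import datetime, timezone

    limit = 0x7fffffff  # 2147483647 seconds
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, the "year 2038" boundary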
Dec  7 04:42:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:42:12 np0005549474 podman[88640]: 2025-12-07 09:42:12.372449217 +0000 UTC m=+1.465871758 container init 87ab610b3fed38e0622db503b5785d6eec475f82a4ac3830617a83278369bcb2 (image=quay.io/ceph/ceph:v19, name=crazy_curie, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:12 np0005549474 podman[88640]: 2025-12-07 09:42:12.385584974 +0000 UTC m=+1.479007485 container start 87ab610b3fed38e0622db503b5785d6eec475f82a4ac3830617a83278369bcb2 (image=quay.io/ceph/ceph:v19, name=crazy_curie, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:12 np0005549474 podman[88640]: 2025-12-07 09:42:12.43233963 +0000 UTC m=+1.525762141 container attach 87ab610b3fed38e0622db503b5785d6eec475f82a4ac3830617a83278369bcb2 (image=quay.io/ceph/ceph:v19, name=crazy_curie, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:12 np0005549474 podman[88681]: 2025-12-07 09:42:12.566173712 +0000 UTC m=+0.067768823 container create ae5e3e27d223037ddd9c9389a79310e90e5050a922951fbc83ccc89f5502c6e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_dubinsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 04:42:12 np0005549474 systemd[1]: Started libpod-conmon-ae5e3e27d223037ddd9c9389a79310e90e5050a922951fbc83ccc89f5502c6e1.scope.
Dec  7 04:42:12 np0005549474 podman[88681]: 2025-12-07 09:42:12.525758048 +0000 UTC m=+0.027353189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:42:12 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:12 np0005549474 podman[88681]: 2025-12-07 09:42:12.643409249 +0000 UTC m=+0.145004410 container init ae5e3e27d223037ddd9c9389a79310e90e5050a922951fbc83ccc89f5502c6e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:12 np0005549474 podman[88681]: 2025-12-07 09:42:12.650417932 +0000 UTC m=+0.152013053 container start ae5e3e27d223037ddd9c9389a79310e90e5050a922951fbc83ccc89f5502c6e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:12 np0005549474 objective_dubinsky[88716]: 167 167
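The "167 167" printed by objective_dubinsky is a uid/gid pair: 167 is the ceph user and group in Red Hat-based ceph images, and cephadm probes it from a throwaway container so that host directories get matching ownership. A sketch that reproduces such a probe; the stat target /var/lib/ceph is an assumption about what is being checked, not something taken from this log:

    # Hedged sketch: probe the ceph uid/gid from the image, as the short-lived
    # containers above appear to do. The path /var/lib/ceph is an assumed
    # ceph-owned location inside the image, not confirmed by this log.
    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/ceph/ceph:v19", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout.strip()
    uid, gid = map(int, out.split())
    print(uid, gid)  # expected: 167 167 for the ceph user in this image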
Dec  7 04:42:12 np0005549474 systemd[1]: libpod-ae5e3e27d223037ddd9c9389a79310e90e5050a922951fbc83ccc89f5502c6e1.scope: Deactivated successfully.
Dec  7 04:42:12 np0005549474 podman[88681]: 2025-12-07 09:42:12.657874958 +0000 UTC m=+0.159470069 container attach ae5e3e27d223037ddd9c9389a79310e90e5050a922951fbc83ccc89f5502c6e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 04:42:12 np0005549474 podman[88681]: 2025-12-07 09:42:12.658525758 +0000 UTC m=+0.160120889 container died ae5e3e27d223037ddd9c9389a79310e90e5050a922951fbc83ccc89f5502c6e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_dubinsky, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:12 np0005549474 systemd[1]: var-lib-containers-storage-overlay-49b302bfd65b3a98d7948f314f6e0f4e45514cb472c1d23c3f8bc0290c40c634-merged.mount: Deactivated successfully.
Dec  7 04:42:12 np0005549474 podman[88681]: 2025-12-07 09:42:12.724134673 +0000 UTC m=+0.225729824 container remove ae5e3e27d223037ddd9c9389a79310e90e5050a922951fbc83ccc89f5502c6e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_dubinsky, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:12 np0005549474 systemd[1]: libpod-conmon-ae5e3e27d223037ddd9c9389a79310e90e5050a922951fbc83ccc89f5502c6e1.scope: Deactivated successfully.
Dec  7 04:42:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Dec  7 04:42:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1876920365' entity='client.admin' 
Dec  7 04:42:12 np0005549474 systemd[1]: libpod-87ab610b3fed38e0622db503b5785d6eec475f82a4ac3830617a83278369bcb2.scope: Deactivated successfully.
Dec  7 04:42:12 np0005549474 conmon[88673]: conmon 87ab610b3fed38e0622d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87ab610b3fed38e0622db503b5785d6eec475f82a4ac3830617a83278369bcb2.scope/container/memory.events
Dec  7 04:42:12 np0005549474 podman[88640]: 2025-12-07 09:42:12.852260352 +0000 UTC m=+1.945682823 container died 87ab610b3fed38e0622db503b5785d6eec475f82a4ac3830617a83278369bcb2 (image=quay.io/ceph/ceph:v19, name=crazy_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 04:42:12 np0005549474 ceph-mon[74516]: Saving service node-exporter spec with placement *
Dec  7 04:42:12 np0005549474 ceph-mon[74516]: Saving service grafana spec with placement compute-0;count:1
Dec  7 04:42:12 np0005549474 ceph-mon[74516]: Saving service prometheus spec with placement compute-0;count:1
Dec  7 04:42:12 np0005549474 ceph-mon[74516]: Saving service alertmanager spec with placement compute-0;count:1
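The four "Saving service ... spec" lines record cephadm persisting monitoring service specs: node-exporter placed everywhere, and grafana, prometheus, and alertmanager each pinned to compute-0 with count 1. A sketch of equivalent specs in the shape `ceph orch apply -i` accepts, reconstructed from the logged placements; the output is an illustration, not a dump of this cluster's stored specs, and PyYAML is assumed to be installed:

    # Hedged sketch: build service specs matching the placements logged above
    # ("node-exporter *" and "grafana/prometheus/alertmanager compute-0;count:1").
    import yaml  # PyYAML, assumed available

    specs = [
        {"service_type": "node-exporter", "placement": {"host_pattern": "*"}},
    ] + [
        {"service_type": s, "placement": {"hosts": ["compute-0"], "count": 1}}
        for s in ("grafana", "prometheus", "alertmanager")
    ]
    print(yaml.safe_dump_all(specs, sort_keys=False))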
Dec  7 04:42:12 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1876920365' entity='client.admin' 
Dec  7 04:42:12 np0005549474 systemd[1]: var-lib-containers-storage-overlay-cc9f1f8928cc9607395c4a8585da62037299eaf4b19161d3fc6cd160ac30dffd-merged.mount: Deactivated successfully.
Dec  7 04:42:12 np0005549474 podman[88640]: 2025-12-07 09:42:12.966165101 +0000 UTC m=+2.059587612 container remove 87ab610b3fed38e0622db503b5785d6eec475f82a4ac3830617a83278369bcb2 (image=quay.io/ceph/ceph:v19, name=crazy_curie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  7 04:42:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v108: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:12 np0005549474 podman[88741]: 2025-12-07 09:42:12.894112779 +0000 UTC m=+0.025321067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:42:12 np0005549474 podman[88741]: 2025-12-07 09:42:12.999180509 +0000 UTC m=+0.130388777 container create 804de202048f62c4b5f049be26fa2cc79679f9f5120e2958b573f727d0edeb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_proskuriakova, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:13 np0005549474 systemd[1]: libpod-conmon-87ab610b3fed38e0622db503b5785d6eec475f82a4ac3830617a83278369bcb2.scope: Deactivated successfully.
Dec  7 04:42:13 np0005549474 systemd[1]: Started libpod-conmon-804de202048f62c4b5f049be26fa2cc79679f9f5120e2958b573f727d0edeb00.scope.
Dec  7 04:42:13 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe16d077995e27963de38943b4d771f7fdc0378e891a971942f3a1e44f337514/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe16d077995e27963de38943b4d771f7fdc0378e891a971942f3a1e44f337514/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe16d077995e27963de38943b4d771f7fdc0378e891a971942f3a1e44f337514/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe16d077995e27963de38943b4d771f7fdc0378e891a971942f3a1e44f337514/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:13 np0005549474 podman[88741]: 2025-12-07 09:42:13.108878551 +0000 UTC m=+0.240086839 container init 804de202048f62c4b5f049be26fa2cc79679f9f5120e2958b573f727d0edeb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:13 np0005549474 podman[88741]: 2025-12-07 09:42:13.117687768 +0000 UTC m=+0.248896036 container start 804de202048f62c4b5f049be26fa2cc79679f9f5120e2958b573f727d0edeb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 04:42:13 np0005549474 podman[88741]: 2025-12-07 09:42:13.128741852 +0000 UTC m=+0.259950220 container attach 804de202048f62c4b5f049be26fa2cc79679f9f5120e2958b573f727d0edeb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:13 np0005549474 python3[88799]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]: {
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:    "0": [
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:        {
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            "devices": [
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "/dev/loop3"
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            ],
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            "lv_name": "ceph_lv0",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            "lv_size": "21470642176",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            "name": "ceph_lv0",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            "tags": {
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.cluster_name": "ceph",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.crush_device_class": "",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.encrypted": "0",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.osd_id": "0",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.type": "block",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.vdo": "0",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:                "ceph.with_tpm": "0"
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            },
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            "type": "block",
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:            "vg_name": "ceph_vg0"
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:        }
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]:    ]
Dec  7 04:42:13 np0005549474 laughing_proskuriakova[88769]: }
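The JSON block emitted by laughing_proskuriakova is an OSD inventory in the shape `ceph-volume lvm list --format json` produces: a single OSD (osd_id 0) on the logical volume /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3. A sketch of extracting the osd_id-to-device mapping from such output; `captured` below is a trimmed stand-in for the container's stdout, with field names exactly as they appear in the log:

    # Sketch: map osd_id -> (lv_path, physical devices) from output like the
    # JSON above. `captured` stands in for the container's stdout.
    import json

    captured = """{ "0": [ { "devices": ["/dev/loop3"],
                              "lv_path": "/dev/ceph_vg0/ceph_lv0",
                              "tags": {"ceph.osd_id": "0", "ceph.type": "block"} } ] }"""

    for osd_id, lvs in json.loads(captured).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]))
    # -> 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3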
Dec  7 04:42:13 np0005549474 podman[88800]: 2025-12-07 09:42:13.316502226 +0000 UTC m=+0.028278707 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:13 np0005549474 systemd[1]: libpod-804de202048f62c4b5f049be26fa2cc79679f9f5120e2958b573f727d0edeb00.scope: Deactivated successfully.
Dec  7 04:42:13 np0005549474 podman[88800]: 2025-12-07 09:42:13.647698632 +0000 UTC m=+0.359475123 container create 5aeb9e0ca45479437f68368b4c083d32a2c326746cd27cadc92043c221a56c62 (image=quay.io/ceph/ceph:v19, name=practical_diffie, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:13 np0005549474 podman[88741]: 2025-12-07 09:42:13.692798738 +0000 UTC m=+0.824007066 container died 804de202048f62c4b5f049be26fa2cc79679f9f5120e2958b573f727d0edeb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_proskuriakova, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 04:42:13 np0005549474 systemd[1]: Started libpod-conmon-5aeb9e0ca45479437f68368b4c083d32a2c326746cd27cadc92043c221a56c62.scope.
Dec  7 04:42:13 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:13 np0005549474 systemd[1]: var-lib-containers-storage-overlay-fe16d077995e27963de38943b4d771f7fdc0378e891a971942f3a1e44f337514-merged.mount: Deactivated successfully.
Dec  7 04:42:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c239c48dd2a6f1d60dd2041bf38068062e6e8d52d388c54569c11439a9efcc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c239c48dd2a6f1d60dd2041bf38068062e6e8d52d388c54569c11439a9efcc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c239c48dd2a6f1d60dd2041bf38068062e6e8d52d388c54569c11439a9efcc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:13 np0005549474 podman[88800]: 2025-12-07 09:42:13.772717457 +0000 UTC m=+0.484493998 container init 5aeb9e0ca45479437f68368b4c083d32a2c326746cd27cadc92043c221a56c62 (image=quay.io/ceph/ceph:v19, name=practical_diffie, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:13 np0005549474 podman[88800]: 2025-12-07 09:42:13.780371829 +0000 UTC m=+0.492148300 container start 5aeb9e0ca45479437f68368b4c083d32a2c326746cd27cadc92043c221a56c62 (image=quay.io/ceph/ceph:v19, name=practical_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:13 np0005549474 podman[88817]: 2025-12-07 09:42:13.794671461 +0000 UTC m=+0.312844481 container remove 804de202048f62c4b5f049be26fa2cc79679f9f5120e2958b573f727d0edeb00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_proskuriakova, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:13 np0005549474 systemd[1]: libpod-conmon-804de202048f62c4b5f049be26fa2cc79679f9f5120e2958b573f727d0edeb00.scope: Deactivated successfully.
Dec  7 04:42:13 np0005549474 podman[88800]: 2025-12-07 09:42:13.806519231 +0000 UTC m=+0.518295732 container attach 5aeb9e0ca45479437f68368b4c083d32a2c326746cd27cadc92043c221a56c62 (image=quay.io/ceph/ceph:v19, name=practical_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Dec  7 04:42:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2339250577' entity='client.admin' 
Dec  7 04:42:14 np0005549474 systemd[1]: libpod-5aeb9e0ca45479437f68368b4c083d32a2c326746cd27cadc92043c221a56c62.scope: Deactivated successfully.
Dec  7 04:42:14 np0005549474 podman[88800]: 2025-12-07 09:42:14.191486545 +0000 UTC m=+0.903263026 container died 5aeb9e0ca45479437f68368b4c083d32a2c326746cd27cadc92043c221a56c62 (image=quay.io/ceph/ceph:v19, name=practical_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:14 np0005549474 systemd[1]: var-lib-containers-storage-overlay-51c239c48dd2a6f1d60dd2041bf38068062e6e8d52d388c54569c11439a9efcc-merged.mount: Deactivated successfully.
Dec  7 04:42:14 np0005549474 podman[88800]: 2025-12-07 09:42:14.232858727 +0000 UTC m=+0.944635198 container remove 5aeb9e0ca45479437f68368b4c083d32a2c326746cd27cadc92043c221a56c62 (image=quay.io/ceph/ceph:v19, name=practical_diffie, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:14 np0005549474 systemd[1]: libpod-conmon-5aeb9e0ca45479437f68368b4c083d32a2c326746cd27cadc92043c221a56c62.scope: Deactivated successfully.
Dec  7 04:42:14 np0005549474 podman[88959]: 2025-12-07 09:42:14.32942388 +0000 UTC m=+0.021383218 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:42:14 np0005549474 podman[88959]: 2025-12-07 09:42:14.537426407 +0000 UTC m=+0.229385695 container create f035b78f7835b2b25ce6b65bbfdef58c88d452452029a8bd073f13aa8d571883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 04:42:14 np0005549474 python3[88998]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:14 np0005549474 systemd[1]: Started libpod-conmon-f035b78f7835b2b25ce6b65bbfdef58c88d452452029a8bd073f13aa8d571883.scope.
Dec  7 04:42:14 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:14 np0005549474 podman[88959]: 2025-12-07 09:42:14.643891129 +0000 UTC m=+0.335850437 container init f035b78f7835b2b25ce6b65bbfdef58c88d452452029a8bd073f13aa8d571883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:42:14 np0005549474 podman[88959]: 2025-12-07 09:42:14.649470038 +0000 UTC m=+0.341429336 container start f035b78f7835b2b25ce6b65bbfdef58c88d452452029a8bd073f13aa8d571883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  7 04:42:14 np0005549474 adoring_ramanujan[89002]: 167 167
Dec  7 04:42:14 np0005549474 podman[88959]: 2025-12-07 09:42:14.654251423 +0000 UTC m=+0.346210761 container attach f035b78f7835b2b25ce6b65bbfdef58c88d452452029a8bd073f13aa8d571883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ramanujan, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 04:42:14 np0005549474 systemd[1]: libpod-f035b78f7835b2b25ce6b65bbfdef58c88d452452029a8bd073f13aa8d571883.scope: Deactivated successfully.
Dec  7 04:42:14 np0005549474 podman[88959]: 2025-12-07 09:42:14.655024046 +0000 UTC m=+0.346983354 container died f035b78f7835b2b25ce6b65bbfdef58c88d452452029a8bd073f13aa8d571883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 04:42:14 np0005549474 podman[89001]: 2025-12-07 09:42:14.681040264 +0000 UTC m=+0.100145462 container create 6173b052d1ff4bbc82ad6820fb382cf91414910f4283318bba7e53c0c14c3efd (image=quay.io/ceph/ceph:v19, name=competent_payne, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 04:42:14 np0005549474 podman[89001]: 2025-12-07 09:42:14.610345344 +0000 UTC m=+0.029450562 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:14 np0005549474 systemd[1]: Started libpod-conmon-6173b052d1ff4bbc82ad6820fb382cf91414910f4283318bba7e53c0c14c3efd.scope.
Dec  7 04:42:14 np0005549474 systemd[1]: var-lib-containers-storage-overlay-56402d62a7715fbe352e47822e2a4c344849e428d0ee26b6f3c7af3980e5473f-merged.mount: Deactivated successfully.
Dec  7 04:42:14 np0005549474 podman[88959]: 2025-12-07 09:42:14.716309982 +0000 UTC m=+0.408269280 container remove f035b78f7835b2b25ce6b65bbfdef58c88d452452029a8bd073f13aa8d571883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:14 np0005549474 systemd[1]: libpod-conmon-f035b78f7835b2b25ce6b65bbfdef58c88d452452029a8bd073f13aa8d571883.scope: Deactivated successfully.
Dec  7 04:42:14 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21615685e1a741f9086b0bd2f3e71b67d59bcc802ba83c5d66ed268f112eefa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21615685e1a741f9086b0bd2f3e71b67d59bcc802ba83c5d66ed268f112eefa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21615685e1a741f9086b0bd2f3e71b67d59bcc802ba83c5d66ed268f112eefa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:14 np0005549474 podman[89001]: 2025-12-07 09:42:14.748845977 +0000 UTC m=+0.167951195 container init 6173b052d1ff4bbc82ad6820fb382cf91414910f4283318bba7e53c0c14c3efd (image=quay.io/ceph/ceph:v19, name=competent_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:14 np0005549474 podman[89001]: 2025-12-07 09:42:14.753618232 +0000 UTC m=+0.172723420 container start 6173b052d1ff4bbc82ad6820fb382cf91414910f4283318bba7e53c0c14c3efd (image=quay.io/ceph/ceph:v19, name=competent_payne, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:14 np0005549474 podman[89001]: 2025-12-07 09:42:14.758834159 +0000 UTC m=+0.177939357 container attach 6173b052d1ff4bbc82ad6820fb382cf91414910f4283318bba7e53c0c14c3efd (image=quay.io/ceph/ceph:v19, name=competent_payne, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:14 np0005549474 podman[89042]: 2025-12-07 09:42:14.850449963 +0000 UTC m=+0.036084093 container create 4a66896ca0ea3c941951fb00cb513278842ec9c7f74efbb17c35e370299399e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_shaw, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:14 np0005549474 systemd[1]: Started libpod-conmon-4a66896ca0ea3c941951fb00cb513278842ec9c7f74efbb17c35e370299399e4.scope.
Dec  7 04:42:14 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276d1788438a2707596079cd8d47a253c90dc43ad111d94270a152bb37c09a7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276d1788438a2707596079cd8d47a253c90dc43ad111d94270a152bb37c09a7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276d1788438a2707596079cd8d47a253c90dc43ad111d94270a152bb37c09a7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276d1788438a2707596079cd8d47a253c90dc43ad111d94270a152bb37c09a7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:14 np0005549474 podman[89042]: 2025-12-07 09:42:14.92104439 +0000 UTC m=+0.106678540 container init 4a66896ca0ea3c941951fb00cb513278842ec9c7f74efbb17c35e370299399e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:14 np0005549474 podman[89042]: 2025-12-07 09:42:14.926443813 +0000 UTC m=+0.112077943 container start 4a66896ca0ea3c941951fb00cb513278842ec9c7f74efbb17c35e370299399e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:14 np0005549474 podman[89042]: 2025-12-07 09:42:14.834143519 +0000 UTC m=+0.019777669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:42:14 np0005549474 podman[89042]: 2025-12-07 09:42:14.931830707 +0000 UTC m=+0.117464837 container attach 4a66896ca0ea3c941951fb00cb513278842ec9c7f74efbb17c35e370299399e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_shaw, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 04:42:14 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2339250577' entity='client.admin' 
Dec  7 04:42:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v109: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Dec  7 04:42:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1229875527' entity='client.admin' 
Dec  7 04:42:15 np0005549474 systemd[1]: libpod-6173b052d1ff4bbc82ad6820fb382cf91414910f4283318bba7e53c0c14c3efd.scope: Deactivated successfully.
Dec  7 04:42:15 np0005549474 podman[89001]: 2025-12-07 09:42:15.185529027 +0000 UTC m=+0.604634225 container died 6173b052d1ff4bbc82ad6820fb382cf91414910f4283318bba7e53c0c14c3efd (image=quay.io/ceph/ceph:v19, name=competent_payne, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 04:42:15 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b21615685e1a741f9086b0bd2f3e71b67d59bcc802ba83c5d66ed268f112eefa-merged.mount: Deactivated successfully.
Dec  7 04:42:15 np0005549474 podman[89001]: 2025-12-07 09:42:15.251873275 +0000 UTC m=+0.670978473 container remove 6173b052d1ff4bbc82ad6820fb382cf91414910f4283318bba7e53c0c14c3efd (image=quay.io/ceph/ceph:v19, name=competent_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 04:42:15 np0005549474 systemd[1]: libpod-conmon-6173b052d1ff4bbc82ad6820fb382cf91414910f4283318bba7e53c0c14c3efd.scope: Deactivated successfully.
Dec  7 04:42:15 np0005549474 lvm[89165]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:42:15 np0005549474 lvm[89165]: VG ceph_vg0 finished
Dec  7 04:42:15 np0005549474 ecstatic_shaw[89077]: {}
Dec  7 04:42:15 np0005549474 systemd[1]: libpod-4a66896ca0ea3c941951fb00cb513278842ec9c7f74efbb17c35e370299399e4.scope: Deactivated successfully.
Dec  7 04:42:15 np0005549474 podman[89042]: 2025-12-07 09:42:15.615360169 +0000 UTC m=+0.800994299 container died 4a66896ca0ea3c941951fb00cb513278842ec9c7f74efbb17c35e370299399e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_shaw, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:15 np0005549474 systemd[1]: libpod-4a66896ca0ea3c941951fb00cb513278842ec9c7f74efbb17c35e370299399e4.scope: Consumed 1.018s CPU time.
Dec  7 04:42:15 np0005549474 systemd[1]: var-lib-containers-storage-overlay-276d1788438a2707596079cd8d47a253c90dc43ad111d94270a152bb37c09a7d-merged.mount: Deactivated successfully.
Dec  7 04:42:15 np0005549474 podman[89042]: 2025-12-07 09:42:15.667856558 +0000 UTC m=+0.853490688 container remove 4a66896ca0ea3c941951fb00cb513278842ec9c7f74efbb17c35e370299399e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:15 np0005549474 systemd[1]: libpod-conmon-4a66896ca0ea3c941951fb00cb513278842ec9c7f74efbb17c35e370299399e4.scope: Deactivated successfully.
Dec  7 04:42:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:15 np0005549474 python3[89200]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
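The trailing #012 in the logged _raw_params is rsyslog's control-character escaping (#NNN in octal), so the original ansible argument ended in a literal newline; the backslashes around \{\{\.Command\}\} are shell escaping of the Go template handed to podman. A small decoder that undoes the #NNN escaping when post-processing logs like this one, as a sketch:

    # Sketch: undo rsyslog's #NNN (octal) control-character escaping, e.g. the
    # trailing "#012" above, which is an embedded newline in the original command.
    import re

    def unescape_rsyslog(s: str) -> str:
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), s)

    print(repr(unescape_rsyslog("--no-trunc#012")))  # -> '--no-trunc\n'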
Dec  7 04:42:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:16 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Dec  7 04:42:16 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Dec  7 04:42:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  7 04:42:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 04:42:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  7 04:42:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  7 04:42:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:16 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  7 04:42:16 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  7 04:42:16 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1229875527' entity='client.admin' 
Dec  7 04:42:16 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:16 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:16 np0005549474 python3[89295]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.dotugk/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:16 np0005549474 podman[89321]: 2025-12-07 09:42:16.377293375 +0000 UTC m=+0.088067137 container create 9276420eb7609eb88a254ca410e1548bc2cba0a71a18de143b706d2d34dc9009 (image=quay.io/ceph/ceph:v19, name=compassionate_wright, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:16 np0005549474 systemd[1]: Started libpod-conmon-9276420eb7609eb88a254ca410e1548bc2cba0a71a18de143b706d2d34dc9009.scope.
Dec  7 04:42:16 np0005549474 podman[89321]: 2025-12-07 09:42:16.317800964 +0000 UTC m=+0.028574746 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:16 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4495bcf2e71255c4f40930f5613491d47c1cc9f88e05995f75cf19db26fd17db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4495bcf2e71255c4f40930f5613491d47c1cc9f88e05995f75cf19db26fd17db/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4495bcf2e71255c4f40930f5613491d47c1cc9f88e05995f75cf19db26fd17db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:16 np0005549474 podman[89321]: 2025-12-07 09:42:16.446643345 +0000 UTC m=+0.157417167 container init 9276420eb7609eb88a254ca410e1548bc2cba0a71a18de143b706d2d34dc9009 (image=quay.io/ceph/ceph:v19, name=compassionate_wright, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 04:42:16 np0005549474 podman[89321]: 2025-12-07 09:42:16.454118831 +0000 UTC m=+0.164892593 container start 9276420eb7609eb88a254ca410e1548bc2cba0a71a18de143b706d2d34dc9009 (image=quay.io/ceph/ceph:v19, name=compassionate_wright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:16 np0005549474 podman[89321]: 2025-12-07 09:42:16.45840853 +0000 UTC m=+0.169182332 container attach 9276420eb7609eb88a254ca410e1548bc2cba0a71a18de143b706d2d34dc9009 (image=quay.io/ceph/ceph:v19, name=compassionate_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:42:16 np0005549474 podman[89359]: 2025-12-07 09:42:16.505239349 +0000 UTC m=+0.037234128 container create fe32c29cdae1ea3dfaf99935ef5dbd5be0f76364806fb578bbed77394806938a (image=quay.io/ceph/ceph:v19, name=quizzical_dirac, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 04:42:16 np0005549474 systemd[1]: Started libpod-conmon-fe32c29cdae1ea3dfaf99935ef5dbd5be0f76364806fb578bbed77394806938a.scope.
Dec  7 04:42:16 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:16 np0005549474 podman[89359]: 2025-12-07 09:42:16.487394399 +0000 UTC m=+0.019389198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:16 np0005549474 podman[89359]: 2025-12-07 09:42:16.643795983 +0000 UTC m=+0.175790762 container init fe32c29cdae1ea3dfaf99935ef5dbd5be0f76364806fb578bbed77394806938a (image=quay.io/ceph/ceph:v19, name=quizzical_dirac, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 04:42:16 np0005549474 podman[89359]: 2025-12-07 09:42:16.651162326 +0000 UTC m=+0.183157105 container start fe32c29cdae1ea3dfaf99935ef5dbd5be0f76364806fb578bbed77394806938a (image=quay.io/ceph/ceph:v19, name=quizzical_dirac, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:42:16 np0005549474 quizzical_dirac[89375]: 167 167
Dec  7 04:42:16 np0005549474 systemd[1]: libpod-fe32c29cdae1ea3dfaf99935ef5dbd5be0f76364806fb578bbed77394806938a.scope: Deactivated successfully.
Dec  7 04:42:16 np0005549474 conmon[89375]: conmon fe32c29cdae1ea3dfaf9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe32c29cdae1ea3dfaf99935ef5dbd5be0f76364806fb578bbed77394806938a.scope/container/memory.events
Dec  7 04:42:16 np0005549474 podman[89359]: 2025-12-07 09:42:16.655075564 +0000 UTC m=+0.187070343 container attach fe32c29cdae1ea3dfaf99935ef5dbd5be0f76364806fb578bbed77394806938a (image=quay.io/ceph/ceph:v19, name=quizzical_dirac, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:42:16 np0005549474 podman[89359]: 2025-12-07 09:42:16.655594021 +0000 UTC m=+0.187588800 container died fe32c29cdae1ea3dfaf99935ef5dbd5be0f76364806fb578bbed77394806938a (image=quay.io/ceph/ceph:v19, name=quizzical_dirac, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:16 np0005549474 systemd[1]: var-lib-containers-storage-overlay-08d6b9cf2f98900ff5fe77fff39046fb7c17c005ac34a15cee07c43cb0d01387-merged.mount: Deactivated successfully.
Dec  7 04:42:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.dotugk/server_addr}] v 0)
Dec  7 04:42:16 np0005549474 podman[89359]: 2025-12-07 09:42:16.961273004 +0000 UTC m=+0.493267803 container remove fe32c29cdae1ea3dfaf99935ef5dbd5be0f76364806fb578bbed77394806938a (image=quay.io/ceph/ceph:v19, name=quizzical_dirac, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/405494054' entity='client.admin' 
Dec  7 04:42:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v110: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:16 np0005549474 systemd[1]: libpod-9276420eb7609eb88a254ca410e1548bc2cba0a71a18de143b706d2d34dc9009.scope: Deactivated successfully.
Dec  7 04:42:16 np0005549474 podman[89321]: 2025-12-07 09:42:16.99847204 +0000 UTC m=+0.709245822 container died 9276420eb7609eb88a254ca410e1548bc2cba0a71a18de143b706d2d34dc9009 (image=quay.io/ceph/ceph:v19, name=compassionate_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:17 np0005549474 systemd[1]: libpod-conmon-fe32c29cdae1ea3dfaf99935ef5dbd5be0f76364806fb578bbed77394806938a.scope: Deactivated successfully.
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:17 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.dotugk (monmap changed)...
Dec  7 04:42:17 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.dotugk (monmap changed)...
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.dotugk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dotugk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 04:42:17 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4495bcf2e71255c4f40930f5613491d47c1cc9f88e05995f75cf19db26fd17db-merged.mount: Deactivated successfully.
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:17 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.dotugk on compute-0
Dec  7 04:42:17 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.dotugk on compute-0
Dec  7 04:42:17 np0005549474 podman[89321]: 2025-12-07 09:42:17.06618493 +0000 UTC m=+0.776958692 container remove 9276420eb7609eb88a254ca410e1548bc2cba0a71a18de143b706d2d34dc9009 (image=quay.io/ceph/ceph:v19, name=compassionate_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 04:42:17 np0005549474 systemd[1]: libpod-conmon-9276420eb7609eb88a254ca410e1548bc2cba0a71a18de143b706d2d34dc9009.scope: Deactivated successfully.
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: Reconfiguring mon.compute-0 (monmap changed)...
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/405494054' entity='client.admin' 
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: Reconfiguring mgr.compute-0.dotugk (monmap changed)...
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dotugk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: Reconfiguring daemon mgr.compute-0.dotugk on compute-0
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:42:17 np0005549474 podman[89491]: 2025-12-07 09:42:17.42722755 +0000 UTC m=+0.041206199 container create e09054af82ceb95346e17d04090ca1b33bdbca4c87adf2b2f22bbcf4853be48d (image=quay.io/ceph/ceph:v19, name=crazy_bose, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 04:42:17 np0005549474 systemd[1]: Started libpod-conmon-e09054af82ceb95346e17d04090ca1b33bdbca4c87adf2b2f22bbcf4853be48d.scope.
Dec  7 04:42:17 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:17 np0005549474 podman[89491]: 2025-12-07 09:42:17.409125132 +0000 UTC m=+0.023103771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:17 np0005549474 podman[89491]: 2025-12-07 09:42:17.518874054 +0000 UTC m=+0.132852673 container init e09054af82ceb95346e17d04090ca1b33bdbca4c87adf2b2f22bbcf4853be48d (image=quay.io/ceph/ceph:v19, name=crazy_bose, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  7 04:42:17 np0005549474 podman[89491]: 2025-12-07 09:42:17.523682209 +0000 UTC m=+0.137660818 container start e09054af82ceb95346e17d04090ca1b33bdbca4c87adf2b2f22bbcf4853be48d (image=quay.io/ceph/ceph:v19, name=crazy_bose, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 04:42:17 np0005549474 podman[89491]: 2025-12-07 09:42:17.526766033 +0000 UTC m=+0.140744652 container attach e09054af82ceb95346e17d04090ca1b33bdbca4c87adf2b2f22bbcf4853be48d (image=quay.io/ceph/ceph:v19, name=crazy_bose, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:42:17 np0005549474 crazy_bose[89507]: 167 167
Dec  7 04:42:17 np0005549474 systemd[1]: libpod-e09054af82ceb95346e17d04090ca1b33bdbca4c87adf2b2f22bbcf4853be48d.scope: Deactivated successfully.
Dec  7 04:42:17 np0005549474 podman[89512]: 2025-12-07 09:42:17.574141547 +0000 UTC m=+0.023812562 container died e09054af82ceb95346e17d04090ca1b33bdbca4c87adf2b2f22bbcf4853be48d (image=quay.io/ceph/ceph:v19, name=crazy_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  7 04:42:17 np0005549474 systemd[1]: var-lib-containers-storage-overlay-cbe9fd674e3f4e361b9131c0456faa801937f0254d5c4f788c5efb860eaf7889-merged.mount: Deactivated successfully.
Dec  7 04:42:17 np0005549474 podman[89512]: 2025-12-07 09:42:17.736620035 +0000 UTC m=+0.186291030 container remove e09054af82ceb95346e17d04090ca1b33bdbca4c87adf2b2f22bbcf4853be48d (image=quay.io/ceph/ceph:v19, name=crazy_bose, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:17 np0005549474 systemd[1]: libpod-conmon-e09054af82ceb95346e17d04090ca1b33bdbca4c87adf2b2f22bbcf4853be48d.scope: Deactivated successfully.
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:17 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 04:42:17 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:17 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Dec  7 04:42:17 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Dec  7 04:42:17 np0005549474 python3[89552]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.buauyv/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:17 np0005549474 podman[89579]: 2025-12-07 09:42:17.930021061 +0000 UTC m=+0.053966655 container create dadcd0520ff2a797492ac487750f9b711d00102f7e19f2383a79271d9a7a9ce4 (image=quay.io/ceph/ceph:v19, name=sharp_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:17 np0005549474 systemd[1]: Started libpod-conmon-dadcd0520ff2a797492ac487750f9b711d00102f7e19f2383a79271d9a7a9ce4.scope.
Dec  7 04:42:18 np0005549474 podman[89579]: 2025-12-07 09:42:17.908346615 +0000 UTC m=+0.032292269 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:18 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:18 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7592c13ec70e2004accdca3b4581dd9ace5ff3572e47fec50b4919ec85e1263/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:18 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7592c13ec70e2004accdca3b4581dd9ace5ff3572e47fec50b4919ec85e1263/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:18 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7592c13ec70e2004accdca3b4581dd9ace5ff3572e47fec50b4919ec85e1263/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:18 np0005549474 podman[89579]: 2025-12-07 09:42:18.022363116 +0000 UTC m=+0.146308730 container init dadcd0520ff2a797492ac487750f9b711d00102f7e19f2383a79271d9a7a9ce4 (image=quay.io/ceph/ceph:v19, name=sharp_elbakyan, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 04:42:18 np0005549474 podman[89579]: 2025-12-07 09:42:18.029284646 +0000 UTC m=+0.153230240 container start dadcd0520ff2a797492ac487750f9b711d00102f7e19f2383a79271d9a7a9ce4 (image=quay.io/ceph/ceph:v19, name=sharp_elbakyan, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:42:18 np0005549474 podman[89579]: 2025-12-07 09:42:18.0317245 +0000 UTC m=+0.155670094 container attach dadcd0520ff2a797492ac487750f9b711d00102f7e19f2383a79271d9a7a9ce4 (image=quay.io/ceph/ceph:v19, name=sharp_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:18 np0005549474 podman[89655]: 2025-12-07 09:42:18.227661721 +0000 UTC m=+0.037548688 container create b0913642595405892533075fe00ee90eefa390ad0c6d2a38d584eed2b92354a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_easley, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:42:18 np0005549474 systemd[1]: Started libpod-conmon-b0913642595405892533075fe00ee90eefa390ad0c6d2a38d584eed2b92354a6.scope.
Dec  7 04:42:18 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:18 np0005549474 podman[89655]: 2025-12-07 09:42:18.282325296 +0000 UTC m=+0.092212283 container init b0913642595405892533075fe00ee90eefa390ad0c6d2a38d584eed2b92354a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_easley, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:18 np0005549474 podman[89655]: 2025-12-07 09:42:18.287535874 +0000 UTC m=+0.097422841 container start b0913642595405892533075fe00ee90eefa390ad0c6d2a38d584eed2b92354a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 04:42:18 np0005549474 ecstatic_easley[89671]: 167 167
Dec  7 04:42:18 np0005549474 systemd[1]: libpod-b0913642595405892533075fe00ee90eefa390ad0c6d2a38d584eed2b92354a6.scope: Deactivated successfully.
Dec  7 04:42:18 np0005549474 podman[89655]: 2025-12-07 09:42:18.29167863 +0000 UTC m=+0.101565617 container attach b0913642595405892533075fe00ee90eefa390ad0c6d2a38d584eed2b92354a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_easley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:18 np0005549474 podman[89655]: 2025-12-07 09:42:18.300002512 +0000 UTC m=+0.109889489 container died b0913642595405892533075fe00ee90eefa390ad0c6d2a38d584eed2b92354a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_easley, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  7 04:42:18 np0005549474 podman[89655]: 2025-12-07 09:42:18.208402208 +0000 UTC m=+0.018289205 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:42:18 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f582395fdebe72231a94b2be343cbef642a7f8fc282e94cd9fd013dc074c8616-merged.mount: Deactivated successfully.
Dec  7 04:42:18 np0005549474 podman[89655]: 2025-12-07 09:42:18.337779885 +0000 UTC m=+0.147666852 container remove b0913642595405892533075fe00ee90eefa390ad0c6d2a38d584eed2b92354a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 04:42:18 np0005549474 systemd[1]: libpod-conmon-b0913642595405892533075fe00ee90eefa390ad0c6d2a38d584eed2b92354a6.scope: Deactivated successfully.
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.buauyv/server_addr}] v 0)
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:18 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec  7 04:42:18 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:18 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Dec  7 04:42:18 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3877033432' entity='client.admin' 
Dec  7 04:42:18 np0005549474 systemd[1]: libpod-dadcd0520ff2a797492ac487750f9b711d00102f7e19f2383a79271d9a7a9ce4.scope: Deactivated successfully.
Dec  7 04:42:18 np0005549474 podman[89579]: 2025-12-07 09:42:18.434139891 +0000 UTC m=+0.558085515 container died dadcd0520ff2a797492ac487750f9b711d00102f7e19f2383a79271d9a7a9ce4 (image=quay.io/ceph/ceph:v19, name=sharp_elbakyan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 04:42:18 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f7592c13ec70e2004accdca3b4581dd9ace5ff3572e47fec50b4919ec85e1263-merged.mount: Deactivated successfully.
Dec  7 04:42:18 np0005549474 podman[89579]: 2025-12-07 09:42:18.473902456 +0000 UTC m=+0.597848050 container remove dadcd0520ff2a797492ac487750f9b711d00102f7e19f2383a79271d9a7a9ce4 (image=quay.io/ceph/ceph:v19, name=sharp_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 04:42:18 np0005549474 systemd[1]: libpod-conmon-dadcd0520ff2a797492ac487750f9b711d00102f7e19f2383a79271d9a7a9ce4.scope: Deactivated successfully.
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: Reconfiguring crash.compute-0 (monmap changed)...
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: Reconfiguring daemon crash.compute-0 on compute-0
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  7 04:42:18 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3877033432' entity='client.admin' 
Dec  7 04:42:18 np0005549474 podman[89771]: 2025-12-07 09:42:18.89588012 +0000 UTC m=+0.054382317 container create 6601a0b217c5bdf0614231582a9e06c6deba8cfcfd9467dfe6d9aa9d834dabac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 04:42:18 np0005549474 systemd[1]: Started libpod-conmon-6601a0b217c5bdf0614231582a9e06c6deba8cfcfd9467dfe6d9aa9d834dabac.scope.
Dec  7 04:42:18 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:18 np0005549474 podman[89771]: 2025-12-07 09:42:18.875262286 +0000 UTC m=+0.033764503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:42:18 np0005549474 podman[89771]: 2025-12-07 09:42:18.974149079 +0000 UTC m=+0.132651306 container init 6601a0b217c5bdf0614231582a9e06c6deba8cfcfd9467dfe6d9aa9d834dabac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_rosalind, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 04:42:18 np0005549474 podman[89771]: 2025-12-07 09:42:18.980805622 +0000 UTC m=+0.139307809 container start 6601a0b217c5bdf0614231582a9e06c6deba8cfcfd9467dfe6d9aa9d834dabac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:18 np0005549474 festive_rosalind[89787]: 167 167
Dec  7 04:42:18 np0005549474 podman[89771]: 2025-12-07 09:42:18.984089481 +0000 UTC m=+0.142591698 container attach 6601a0b217c5bdf0614231582a9e06c6deba8cfcfd9467dfe6d9aa9d834dabac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_rosalind, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 04:42:18 np0005549474 systemd[1]: libpod-6601a0b217c5bdf0614231582a9e06c6deba8cfcfd9467dfe6d9aa9d834dabac.scope: Deactivated successfully.
Dec  7 04:42:18 np0005549474 podman[89771]: 2025-12-07 09:42:18.98540872 +0000 UTC m=+0.143910917 container died 6601a0b217c5bdf0614231582a9e06c6deba8cfcfd9467dfe6d9aa9d834dabac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_rosalind, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 04:42:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v111: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:19 np0005549474 systemd[1]: var-lib-containers-storage-overlay-efabc5e9b178c45d664d47df0a467ac294e00c1adf00dd64564f12d5c7260073-merged.mount: Deactivated successfully.
Dec  7 04:42:19 np0005549474 podman[89771]: 2025-12-07 09:42:19.026777123 +0000 UTC m=+0.185279310 container remove 6601a0b217c5bdf0614231582a9e06c6deba8cfcfd9467dfe6d9aa9d834dabac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_rosalind, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:42:19 np0005549474 systemd[1]: libpod-conmon-6601a0b217c5bdf0614231582a9e06c6deba8cfcfd9467dfe6d9aa9d834dabac.scope: Deactivated successfully.
Dec  7 04:42:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:19 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Dec  7 04:42:19 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Dec  7 04:42:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  7 04:42:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 04:42:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:19 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Dec  7 04:42:19 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Dec  7 04:42:19 np0005549474 python3[89836]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.ntknug/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:19 np0005549474 podman[89837]: 2025-12-07 09:42:19.433277379 +0000 UTC m=+0.051326985 container create c1512014de6b6bfa6cb77de8eb3e1b65ec384e5c48b05d5090eecc88f7e60006 (image=quay.io/ceph/ceph:v19, name=relaxed_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 04:42:19 np0005549474 systemd[1]: Started libpod-conmon-c1512014de6b6bfa6cb77de8eb3e1b65ec384e5c48b05d5090eecc88f7e60006.scope.
Dec  7 04:42:19 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08259a9f9a685ee69aaf71f42c1123e77f7e3558aa0b60b68ef0df15c6ece81/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08259a9f9a685ee69aaf71f42c1123e77f7e3558aa0b60b68ef0df15c6ece81/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08259a9f9a685ee69aaf71f42c1123e77f7e3558aa0b60b68ef0df15c6ece81/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:19 np0005549474 podman[89837]: 2025-12-07 09:42:19.405989152 +0000 UTC m=+0.024038848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:19 np0005549474 podman[89837]: 2025-12-07 09:42:19.512859628 +0000 UTC m=+0.130909244 container init c1512014de6b6bfa6cb77de8eb3e1b65ec384e5c48b05d5090eecc88f7e60006 (image=quay.io/ceph/ceph:v19, name=relaxed_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:19 np0005549474 podman[89837]: 2025-12-07 09:42:19.521145418 +0000 UTC m=+0.139195024 container start c1512014de6b6bfa6cb77de8eb3e1b65ec384e5c48b05d5090eecc88f7e60006 (image=quay.io/ceph/ceph:v19, name=relaxed_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:19 np0005549474 podman[89837]: 2025-12-07 09:42:19.525023336 +0000 UTC m=+0.143072972 container attach c1512014de6b6bfa6cb77de8eb3e1b65ec384e5c48b05d5090eecc88f7e60006 (image=quay.io/ceph/ceph:v19, name=relaxed_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.ntknug/server_addr}] v 0)
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: Reconfiguring osd.0 (monmap changed)...
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: Reconfiguring daemon osd.0 on compute-0
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/995242702' entity='client.admin' 
Dec  7 04:42:20 np0005549474 systemd[1]: libpod-c1512014de6b6bfa6cb77de8eb3e1b65ec384e5c48b05d5090eecc88f7e60006.scope: Deactivated successfully.
Dec  7 04:42:20 np0005549474 podman[89837]: 2025-12-07 09:42:20.261554153 +0000 UTC m=+0.879603799 container died c1512014de6b6bfa6cb77de8eb3e1b65ec384e5c48b05d5090eecc88f7e60006 (image=quay.io/ceph/ceph:v19, name=relaxed_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:42:20 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d08259a9f9a685ee69aaf71f42c1123e77f7e3558aa0b60b68ef0df15c6ece81-merged.mount: Deactivated successfully.
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:42:20 np0005549474 podman[89837]: 2025-12-07 09:42:20.315018821 +0000 UTC m=+0.933068437 container remove c1512014de6b6bfa6cb77de8eb3e1b65ec384e5c48b05d5090eecc88f7e60006 (image=quay.io/ceph/ceph:v19, name=relaxed_lovelace, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:20 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec  7 04:42:20 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:20 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Dec  7 04:42:20 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Dec  7 04:42:20 np0005549474 systemd[1]: libpod-conmon-c1512014de6b6bfa6cb77de8eb3e1b65ec384e5c48b05d5090eecc88f7e60006.scope: Deactivated successfully.
Dec  7 04:42:20 np0005549474 python3[89913]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:20 np0005549474 podman[89914]: 2025-12-07 09:42:20.696745627 +0000 UTC m=+0.065469062 container create 27019fdb33a03c3393ba0df1a0429a2da2925456363e2a7a9b663fcf90c42d2e (image=quay.io/ceph/ceph:v19, name=admiring_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 04:42:20 np0005549474 systemd[1]: Started libpod-conmon-27019fdb33a03c3393ba0df1a0429a2da2925456363e2a7a9b663fcf90c42d2e.scope.
Dec  7 04:42:20 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2379876333b445382aecbe8d4838e0059ebbd7ba0db04210c1d6f408744b9afe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2379876333b445382aecbe8d4838e0059ebbd7ba0db04210c1d6f408744b9afe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2379876333b445382aecbe8d4838e0059ebbd7ba0db04210c1d6f408744b9afe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:20 np0005549474 podman[89914]: 2025-12-07 09:42:20.664918764 +0000 UTC m=+0.033642219 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:20 np0005549474 podman[89914]: 2025-12-07 09:42:20.773682617 +0000 UTC m=+0.142406052 container init 27019fdb33a03c3393ba0df1a0429a2da2925456363e2a7a9b663fcf90c42d2e (image=quay.io/ceph/ceph:v19, name=admiring_taussig, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:20 np0005549474 podman[89914]: 2025-12-07 09:42:20.780676398 +0000 UTC m=+0.149399833 container start 27019fdb33a03c3393ba0df1a0429a2da2925456363e2a7a9b663fcf90c42d2e (image=quay.io/ceph/ceph:v19, name=admiring_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:20 np0005549474 podman[89914]: 2025-12-07 09:42:20.785161784 +0000 UTC m=+0.153885219 container attach 27019fdb33a03c3393ba0df1a0429a2da2925456363e2a7a9b663fcf90c42d2e (image=quay.io/ceph/ceph:v19, name=admiring_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v112: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:21 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Dec  7 04:42:21 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:21 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Dec  7 04:42:21 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3060147273' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: Reconfiguring crash.compute-1 (monmap changed)...
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: Reconfiguring daemon crash.compute-1 on compute-1
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/995242702' entity='client.admin' 
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: Reconfiguring osd.1 (monmap changed)...
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: Reconfiguring daemon osd.1 on compute-1
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: Reconfiguring mon.compute-1 (monmap changed)...
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: Reconfiguring daemon mon.compute-1 on compute-1
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3060147273' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:21 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Dec  7 04:42:21 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:21 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Dec  7 04:42:21 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3060147273' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  7 04:42:22 np0005549474 admiring_taussig[89929]: module 'dashboard' is already disabled
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.dotugk(active, since 2m), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:22 np0005549474 systemd[1]: libpod-27019fdb33a03c3393ba0df1a0429a2da2925456363e2a7a9b663fcf90c42d2e.scope: Deactivated successfully.
Dec  7 04:42:22 np0005549474 podman[89954]: 2025-12-07 09:42:22.213251156 +0000 UTC m=+0.031759442 container died 27019fdb33a03c3393ba0df1a0429a2da2925456363e2a7a9b663fcf90c42d2e (image=quay.io/ceph/ceph:v19, name=admiring_taussig, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:22 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2379876333b445382aecbe8d4838e0059ebbd7ba0db04210c1d6f408744b9afe-merged.mount: Deactivated successfully.
Dec  7 04:42:22 np0005549474 podman[89954]: 2025-12-07 09:42:22.253973839 +0000 UTC m=+0.072482085 container remove 27019fdb33a03c3393ba0df1a0429a2da2925456363e2a7a9b663fcf90c42d2e (image=quay.io/ceph/ceph:v19, name=admiring_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:22 np0005549474 systemd[1]: libpod-conmon-27019fdb33a03c3393ba0df1a0429a2da2925456363e2a7a9b663fcf90c42d2e.scope: Deactivated successfully.
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:22 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.ntknug (monmap changed)...
Dec  7 04:42:22 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.ntknug (monmap changed)...
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.ntknug", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ntknug", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:22 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.ntknug on compute-2
Dec  7 04:42:22 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.ntknug on compute-2
Dec  7 04:42:22 np0005549474 python3[89994]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: Reconfiguring mon.compute-2 (monmap changed)...
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: Reconfiguring daemon mon.compute-2 on compute-2
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3060147273' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:22 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ntknug", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 04:42:22 np0005549474 podman[89995]: 2025-12-07 09:42:22.775650112 +0000 UTC m=+0.063810433 container create eed96beda40f40bfe86d31f9b5e2b2a9119623f15b7ce8f303ea506e7d1b17df (image=quay.io/ceph/ceph:v19, name=focused_hertz, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:22 np0005549474 systemd[1]: Started libpod-conmon-eed96beda40f40bfe86d31f9b5e2b2a9119623f15b7ce8f303ea506e7d1b17df.scope.
Dec  7 04:42:22 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:22 np0005549474 podman[89995]: 2025-12-07 09:42:22.758498822 +0000 UTC m=+0.046659203 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c0aab6a6d5e187f6df814319c98e233c4528cbbcb90c908bd61c18b6cd717d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c0aab6a6d5e187f6df814319c98e233c4528cbbcb90c908bd61c18b6cd717d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c0aab6a6d5e187f6df814319c98e233c4528cbbcb90c908bd61c18b6cd717d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:22 np0005549474 podman[89995]: 2025-12-07 09:42:22.86673995 +0000 UTC m=+0.154900291 container init eed96beda40f40bfe86d31f9b5e2b2a9119623f15b7ce8f303ea506e7d1b17df (image=quay.io/ceph/ceph:v19, name=focused_hertz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 04:42:22 np0005549474 podman[89995]: 2025-12-07 09:42:22.873064291 +0000 UTC m=+0.161224642 container start eed96beda40f40bfe86d31f9b5e2b2a9119623f15b7ce8f303ea506e7d1b17df (image=quay.io/ceph/ceph:v19, name=focused_hertz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:22 np0005549474 podman[89995]: 2025-12-07 09:42:22.87635304 +0000 UTC m=+0.164513411 container attach eed96beda40f40bfe86d31f9b5e2b2a9119623f15b7ce8f303ea506e7d1b17df (image=quay.io/ceph/ceph:v19, name=focused_hertz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v113: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:42:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:42:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec  7 04:42:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2991289158' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  7 04:42:23 np0005549474 ceph-mon[74516]: Reconfiguring mgr.compute-2.ntknug (monmap changed)...
Dec  7 04:42:23 np0005549474 ceph-mon[74516]: Reconfiguring daemon mgr.compute-2.ntknug on compute-2
Dec  7 04:42:23 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:23 np0005549474 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:23 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2991289158' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  7 04:42:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2991289158' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  1: '-n'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  2: 'mgr.compute-0.dotugk'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  3: '-f'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  4: '--setuser'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  5: 'ceph'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  6: '--setgroup'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  7: 'ceph'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  8: '--default-log-to-file=false'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  9: '--default-log-to-journald=true'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr respawn  exe_path /proc/self/exe
Dec  7 04:42:24 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.dotugk(active, since 2m), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:24 np0005549474 systemd[1]: libpod-eed96beda40f40bfe86d31f9b5e2b2a9119623f15b7ce8f303ea506e7d1b17df.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 podman[89995]: 2025-12-07 09:42:24.36151279 +0000 UTC m=+1.649673121 container died eed96beda40f40bfe86d31f9b5e2b2a9119623f15b7ce8f303ea506e7d1b17df (image=quay.io/ceph/ceph:v19, name=focused_hertz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:24 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b6c0aab6a6d5e187f6df814319c98e233c4528cbbcb90c908bd61c18b6cd717d-merged.mount: Deactivated successfully.
Dec  7 04:42:24 np0005549474 podman[89995]: 2025-12-07 09:42:24.398909573 +0000 UTC m=+1.687069904 container remove eed96beda40f40bfe86d31f9b5e2b2a9119623f15b7ce8f303ea506e7d1b17df (image=quay.io/ceph/ceph:v19, name=focused_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 04:42:24 np0005549474 systemd[1]: libpod-conmon-eed96beda40f40bfe86d31f9b5e2b2a9119623f15b7ce8f303ea506e7d1b17df.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd[1]: session-33.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd[1]: session-33.scope: Consumed 28.236s CPU time.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 33 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd[1]: session-24.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 24 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 33.
Dec  7 04:42:24 np0005549474 systemd[1]: session-27.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 27 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd[1]: session-21.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 24.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 21 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd[1]: session-29.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd[1]: session-30.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd[1]: session-32.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd[1]: session-31.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd[1]: session-23.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd[1]: session-26.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd[1]: session-28.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd[1]: session-25.scope: Deactivated successfully.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 27.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 25 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 28 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 23 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 32 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 30 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 29 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 31 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Session 26 logged out. Waiting for processes to exit.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 21.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 29.
Dec  7 04:42:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ignoring --setuser ceph since I am not root
Dec  7 04:42:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ignoring --setgroup ceph since I am not root
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 30.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 32.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 31.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 23.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 26.
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 28.
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: pidfile_write: ignore empty --pid-file
Dec  7 04:42:24 np0005549474 systemd-logind[796]: Removed session 25.
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'alerts'
Dec  7 04:42:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:24.605+0000 7fee919c2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'balancer'
Dec  7 04:42:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:24.689+0000 7fee919c2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 04:42:24 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'cephadm'
Dec  7 04:42:24 np0005549474 python3[90091]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:24 np0005549474 podman[90092]: 2025-12-07 09:42:24.874637804 +0000 UTC m=+0.044039554 container create c0442b33a6766863a947fd7f173d23818b045250f6fa449e337daf0aed616467 (image=quay.io/ceph/ceph:v19, name=frosty_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:42:24 np0005549474 systemd[1]: Started libpod-conmon-c0442b33a6766863a947fd7f173d23818b045250f6fa449e337daf0aed616467.scope.
Dec  7 04:42:24 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:24 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2c99cd22520ac54412961b8f2aaff642b0c34366f4666b48ac8406ecd1b6e3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:24 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2c99cd22520ac54412961b8f2aaff642b0c34366f4666b48ac8406ecd1b6e3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:24 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2c99cd22520ac54412961b8f2aaff642b0c34366f4666b48ac8406ecd1b6e3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:24 np0005549474 podman[90092]: 2025-12-07 09:42:24.857393382 +0000 UTC m=+0.026795142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:24 np0005549474 podman[90092]: 2025-12-07 09:42:24.952773099 +0000 UTC m=+0.122174869 container init c0442b33a6766863a947fd7f173d23818b045250f6fa449e337daf0aed616467 (image=quay.io/ceph/ceph:v19, name=frosty_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 04:42:24 np0005549474 podman[90092]: 2025-12-07 09:42:24.958448001 +0000 UTC m=+0.127849751 container start c0442b33a6766863a947fd7f173d23818b045250f6fa449e337daf0aed616467 (image=quay.io/ceph/ceph:v19, name=frosty_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:42:24 np0005549474 podman[90092]: 2025-12-07 09:42:24.961099941 +0000 UTC m=+0.130501721 container attach c0442b33a6766863a947fd7f173d23818b045250f6fa449e337daf0aed616467 (image=quay.io/ceph/ceph:v19, name=frosty_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:25 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2991289158' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  7 04:42:25 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'crash'
Dec  7 04:42:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:25.493+0000 7fee919c2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:42:25 np0005549474 ceph-mgr[74811]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:42:25 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'dashboard'
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'devicehealth'
Dec  7 04:42:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:26.154+0000 7fee919c2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 04:42:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 04:42:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 04:42:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  from numpy import show_config as show_numpy_config
Dec  7 04:42:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:26.322+0000 7fee919c2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'influx'
Dec  7 04:42:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:26.393+0000 7fee919c2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'insights'
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'iostat'
Dec  7 04:42:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:26.532+0000 7fee919c2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'k8sevents'
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'localpool'
Dec  7 04:42:26 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mirroring'
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'nfs'
Dec  7 04:42:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:42:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:27.560+0000 7fee919c2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'orchestrator'
Dec  7 04:42:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:27.771+0000 7fee919c2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 04:42:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:27.841+0000 7fee919c2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_support'
Dec  7 04:42:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:27.904+0000 7fee919c2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 04:42:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:27.977+0000 7fee919c2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:42:27 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'progress'
Dec  7 04:42:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:28.042+0000 7fee919c2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:42:28 np0005549474 ceph-mgr[74811]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:42:28 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'prometheus'
Dec  7 04:42:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:28.356+0000 7fee919c2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:42:28 np0005549474 ceph-mgr[74811]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:42:28 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rbd_support'
Dec  7 04:42:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:28.448+0000 7fee919c2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:42:28 np0005549474 ceph-mgr[74811]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:42:28 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'restful'
Dec  7 04:42:28 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rgw'
Dec  7 04:42:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:28.876+0000 7fee919c2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:42:28 np0005549474 ceph-mgr[74811]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:42:28 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rook'
Dec  7 04:42:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:29.434+0000 7fee919c2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'selftest'
Dec  7 04:42:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:29.503+0000 7fee919c2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'snap_schedule'
Dec  7 04:42:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:29.588+0000 7fee919c2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'stats'
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'status'
Dec  7 04:42:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:29.729+0000 7fee919c2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telegraf'
Dec  7 04:42:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:29.802+0000 7fee919c2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telemetry'
Dec  7 04:42:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:29.957+0000 7fee919c2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:42:29 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 04:42:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:30.171+0000 7fee919c2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'volumes'
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.buauyv restarted
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.buauyv started
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ntknug restarted
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ntknug started
Dec  7 04:42:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:30.430+0000 7fee919c2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'zabbix'
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.dotugk(active, since 3m), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:30.538+0000 7fee919c2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dotugk restarted
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dotugk
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: ms_deliver_dispatch: unhandled message 0x55d937809860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map Activating!
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map I am now activating
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.dotugk(active, starting, since 0.03578s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e1 all = 1
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: balancer
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Manager daemon compute-0.dotugk is now available
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [balancer INFO root] Starting
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:42:30
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: cephadm
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: crash
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: dashboard
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: devicehealth
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Starting
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO sso] Loading SSO DB version=1
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: iostat
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: nfs
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: orchestrator
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: pg_autoscaler
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: progress
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] recovery thread starting
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] starting setup
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: rbd_support
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: restful
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [progress INFO root] Loading...
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fee14d80b20>, <progress.module.GhostEvent object at 0x7fee14d80af0>, <progress.module.GhostEvent object at 0x7fee14d80ac0>, <progress.module.GhostEvent object at 0x7fee14d80b80>, <progress.module.GhostEvent object at 0x7fee14d80bb0>, <progress.module.GhostEvent object at 0x7fee14d80be0>, <progress.module.GhostEvent object at 0x7fee14d80c10>, <progress.module.GhostEvent object at 0x7fee14d80c40>, <progress.module.GhostEvent object at 0x7fee14d80c70>] historic events
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: status
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [progress INFO root] Loaded OSDMap, ready.
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [restful INFO root] server_addr: :: server_port: 8003
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [restful WARNING root] server not running: no certificate configured
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: telemetry
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] PerfHandler: starting
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: volumes
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TaskHandler: starting
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"} v 0)
Dec  7 04:42:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] setup complete
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  7 04:42:30 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  7 04:42:31 np0005549474 systemd-logind[796]: New session 34 of user ceph-admin.
Dec  7 04:42:31 np0005549474 systemd[1]: Started Session 34 of User ceph-admin.
Dec  7 04:42:31 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.module] Engine started.
Dec  7 04:42:31 np0005549474 ceph-mon[74516]: Active manager daemon compute-0.dotugk restarted
Dec  7 04:42:31 np0005549474 ceph-mon[74516]: Activating manager daemon compute-0.dotugk
Dec  7 04:42:31 np0005549474 ceph-mon[74516]: Manager daemon compute-0.dotugk is now available
Dec  7 04:42:31 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec  7 04:42:31 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec  7 04:42:31 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.dotugk(active, since 1.06172s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:31 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14373 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:42:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v3: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Dec  7 04:42:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:31 np0005549474 frosty_tu[90108]: Option GRAFANA_API_USERNAME updated
Dec  7 04:42:31 np0005549474 systemd[1]: libpod-c0442b33a6766863a947fd7f173d23818b045250f6fa449e337daf0aed616467.scope: Deactivated successfully.
Dec  7 04:42:31 np0005549474 podman[90092]: 2025-12-07 09:42:31.658297856 +0000 UTC m=+6.827699656 container died c0442b33a6766863a947fd7f173d23818b045250f6fa449e337daf0aed616467 (image=quay.io/ceph/ceph:v19, name=frosty_tu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 04:42:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4f2c99cd22520ac54412961b8f2aaff642b0c34366f4666b48ac8406ecd1b6e3-merged.mount: Deactivated successfully.
Dec  7 04:42:31 np0005549474 podman[90092]: 2025-12-07 09:42:31.697812372 +0000 UTC m=+6.867214142 container remove c0442b33a6766863a947fd7f173d23818b045250f6fa449e337daf0aed616467 (image=quay.io/ceph/ceph:v19, name=frosty_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:42:31 np0005549474 systemd[1]: libpod-conmon-c0442b33a6766863a947fd7f173d23818b045250f6fa449e337daf0aed616467.scope: Deactivated successfully.
Dec  7 04:42:31 np0005549474 podman[90405]: 2025-12-07 09:42:31.912502462 +0000 UTC m=+0.085185491 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  7 04:42:32 np0005549474 python3[90443]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Dec  7 04:42:32 np0005549474 podman[90405]: 2025-12-07 09:42:32.023242554 +0000 UTC m=+0.195925603 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  7 04:42:32 np0005549474 podman[90451]: 2025-12-07 09:42:32.083408555 +0000 UTC m=+0.059846903 container create 7f53d0d6be5c0ea4e06b40c0b407c59b98c4461c343e6d6eaa28af17ede6670a (image=quay.io/ceph/ceph:v19, name=elated_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:42:32 np0005549474 systemd[1]: Started libpod-conmon-7f53d0d6be5c0ea4e06b40c0b407c59b98c4461c343e6d6eaa28af17ede6670a.scope.
Dec  7 04:42:32 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:32 np0005549474 podman[90451]: 2025-12-07 09:42:32.047316643 +0000 UTC m=+0.023755051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03086417b6e54bb8a2862cb126fe0e0ab42871e1d8e2cb8e1574a7bee8909ae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03086417b6e54bb8a2862cb126fe0e0ab42871e1d8e2cb8e1574a7bee8909ae/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b03086417b6e54bb8a2862cb126fe0e0ab42871e1d8e2cb8e1574a7bee8909ae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:32 np0005549474 podman[90451]: 2025-12-07 09:42:32.152982812 +0000 UTC m=+0.129421180 container init 7f53d0d6be5c0ea4e06b40c0b407c59b98c4461c343e6d6eaa28af17ede6670a (image=quay.io/ceph/ceph:v19, name=elated_poincare, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:32 np0005549474 podman[90451]: 2025-12-07 09:42:32.160760996 +0000 UTC m=+0.137199354 container start 7f53d0d6be5c0ea4e06b40c0b407c59b98c4461c343e6d6eaa28af17ede6670a (image=quay.io/ceph/ceph:v19, name=elated_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  7 04:42:32 np0005549474 podman[90451]: 2025-12-07 09:42:32.164785929 +0000 UTC m=+0.141224277 container attach 7f53d0d6be5c0ea4e06b40c0b407c59b98c4461c343e6d6eaa28af17ede6670a (image=quay.io/ceph/ceph:v19, name=elated_poincare, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:42:32] ENGINE Bus STARTING
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:42:32] ENGINE Bus STARTING
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:42:32] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:42:32] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.24164 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:42:32] ENGINE Client ('192.168.122.100', 41934) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:42:32] ENGINE Client ('192.168.122.100', 41934) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 elated_poincare[90493]: Option GRAFANA_API_PASSWORD updated
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v4: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:32 np0005549474 systemd[1]: libpod-7f53d0d6be5c0ea4e06b40c0b407c59b98c4461c343e6d6eaa28af17ede6670a.scope: Deactivated successfully.
Dec  7 04:42:32 np0005549474 podman[90451]: 2025-12-07 09:42:32.60678376 +0000 UTC m=+0.583222118 container died 7f53d0d6be5c0ea4e06b40c0b407c59b98c4461c343e6d6eaa28af17ede6670a (image=quay.io/ceph/ceph:v19, name=elated_poincare, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:42:32] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:42:32] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:42:32] ENGINE Bus STARTED
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:42:32] ENGINE Bus STARTED
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:42:32 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b03086417b6e54bb8a2862cb126fe0e0ab42871e1d8e2cb8e1574a7bee8909ae-merged.mount: Deactivated successfully.
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:42:32] ENGINE Bus STARTING
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:32 np0005549474 podman[90451]: 2025-12-07 09:42:32.733169836 +0000 UTC m=+0.709608184 container remove 7f53d0d6be5c0ea4e06b40c0b407c59b98c4461c343e6d6eaa28af17ede6670a (image=quay.io/ceph/ceph:v19, name=elated_poincare, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 04:42:32 np0005549474 systemd[1]: libpod-conmon-7f53d0d6be5c0ea4e06b40c0b407c59b98c4461c343e6d6eaa28af17ede6670a.scope: Deactivated successfully.
Dec  7 04:42:32 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Check health
Dec  7 04:42:33 np0005549474 python3[90695]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:33 np0005549474 podman[90733]: 2025-12-07 09:42:33.124999238 +0000 UTC m=+0.033813625 container create 471266b98e2479e95d87ae5a44a3b2bd6a2a4b490aa46ac15f6b20a6eba0bdfb (image=quay.io/ceph/ceph:v19, name=inspiring_wing, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:33 np0005549474 systemd[1]: Started libpod-conmon-471266b98e2479e95d87ae5a44a3b2bd6a2a4b490aa46ac15f6b20a6eba0bdfb.scope.
Dec  7 04:42:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a650daf1ef7f6676fc5a4495a63e3425d6b138a4cec590bb3e9b1aab982c79e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a650daf1ef7f6676fc5a4495a63e3425d6b138a4cec590bb3e9b1aab982c79e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a650daf1ef7f6676fc5a4495a63e3425d6b138a4cec590bb3e9b1aab982c79e6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:33 np0005549474 podman[90733]: 2025-12-07 09:42:33.187730776 +0000 UTC m=+0.096545203 container init 471266b98e2479e95d87ae5a44a3b2bd6a2a4b490aa46ac15f6b20a6eba0bdfb (image=quay.io/ceph/ceph:v19, name=inspiring_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 04:42:33 np0005549474 podman[90733]: 2025-12-07 09:42:33.19480647 +0000 UTC m=+0.103620867 container start 471266b98e2479e95d87ae5a44a3b2bd6a2a4b490aa46ac15f6b20a6eba0bdfb (image=quay.io/ceph/ceph:v19, name=inspiring_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:42:33 np0005549474 podman[90733]: 2025-12-07 09:42:33.198725319 +0000 UTC m=+0.107539746 container attach 471266b98e2479e95d87ae5a44a3b2bd6a2a4b490aa46ac15f6b20a6eba0bdfb (image=quay.io/ceph/ceph:v19, name=inspiring_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 04:42:33 np0005549474 podman[90733]: 2025-12-07 09:42:33.110158378 +0000 UTC m=+0.018972775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.dotugk(active, since 2s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 04:42:33 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 04:42:33 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec  7 04:42:33 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec  7 04:42:33 np0005549474 ceph-mgr[74811]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  7 04:42:33 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 04:42:33 np0005549474 ceph-mgr[74811]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:33 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
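What failed here is the cephadm memory autotuner: it split the small amount of RAM on these lab nodes across daemons and tried to push osd_memory_target values of 134217728 bytes (exactly 128 MiB) and 134214860 bytes (about 127.9 MiB), but the option enforces a floor of 939524096 bytes (896 MiB), so the mon refused both config sets and the warning repeats on every reconciliation pass. On hosts this small the usual remedy is to turn autotuning off and pin a value at or above the floor, as a sketch:

    # Stop cephadm from recomputing per-OSD memory targets each pass
    ceph config set osd osd_memory_target_autotune false
    # Pin the target at the minimum the option accepts: 896 MiB
    ceph config set osd osd_memory_target 939524096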
Dec  7 04:42:33 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14409 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 inspiring_wing[90773]: Option ALERTMANAGER_API_HOST updated
Dec  7 04:42:33 np0005549474 systemd[1]: libpod-471266b98e2479e95d87ae5a44a3b2bd6a2a4b490aa46ac15f6b20a6eba0bdfb.scope: Deactivated successfully.
Dec  7 04:42:33 np0005549474 podman[90733]: 2025-12-07 09:42:33.581140846 +0000 UTC m=+0.489955253 container died 471266b98e2479e95d87ae5a44a3b2bd6a2a4b490aa46ac15f6b20a6eba0bdfb (image=quay.io/ceph/ceph:v19, name=inspiring_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a650daf1ef7f6676fc5a4495a63e3425d6b138a4cec590bb3e9b1aab982c79e6-merged.mount: Deactivated successfully.
Dec  7 04:42:33 np0005549474 podman[90733]: 2025-12-07 09:42:33.61760715 +0000 UTC m=+0.526421547 container remove 471266b98e2479e95d87ae5a44a3b2bd6a2a4b490aa46ac15f6b20a6eba0bdfb (image=quay.io/ceph/ceph:v19, name=inspiring_wing, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:33 np0005549474 systemd[1]: libpod-conmon-471266b98e2479e95d87ae5a44a3b2bd6a2a4b490aa46ac15f6b20a6eba0bdfb.scope: Deactivated successfully.
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:42:32] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:42:32] ENGINE Client ('192.168.122.100', 41934) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:42:32] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:42:32] ENGINE Bus STARTED
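The ENGINE lines are the dashboard's embedded CherryPy server coming up: a TLS listener on 192.168.122.100:7150 and a plain-HTTP listener on 8765 (likely the mgr's service-discovery endpoint for Prometheus). The handshake EOF is what CherryPy logs when a client opens the TCP connection and drops it before finishing TLS, which is typical of a port probe or health check rather than a fault. A sketch of two ways to poke the listener, one clean request and one that reproduces the EOF:

    # Clean TLS request; -k skips verification of the self-signed cert
    curl -ks -o /dev/null -w '%{http_code}\n' https://192.168.122.100:7150
    # Bare TCP connect that closes immediately; logs the same handshake EOF
    timeout 1 bash -c '</dev/tcp/192.168.122.100/7150'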
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: Unable to set osd_memory_target on compute-1 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:33 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:33 np0005549474 python3[90851]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
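journald stores ansible's raw command with newlines escaped as #012, which makes the line above hard to read; decoded, the task runs the same throwaway-container pattern seen throughout this section (create, init, start, attach, died, remove, with systemd reaping the libpod-conmon scope once the CLI inside exits):

    podman run --rm --net=host --ipc=host --interactive \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      dashboard set-prometheus-api-host http://192.168.122.100:9092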
Dec  7 04:42:34 np0005549474 podman[90852]: 2025-12-07 09:42:34.011738581 +0000 UTC m=+0.042722344 container create a75b6ec15007c9677d5acf22a295d78c641acbd396c363eae8ea96a7c77f017d (image=quay.io/ceph/ceph:v19, name=fervent_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:42:34 np0005549474 systemd[1]: Started libpod-conmon-a75b6ec15007c9677d5acf22a295d78c641acbd396c363eae8ea96a7c77f017d.scope.
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
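The config generate-minimal-conf and auth get calls just above are cephadm assembling the two files it then pushes to every managed host: a minimal ceph.conf (fsid plus mon addresses) and the client.admin keyring. Both can be printed by hand with the same commands the mgr issued:

    ceph config generate-minimal-conf   # the conf cephadm distributes
    ceph auth get client.admin          # the keyring pushed alongside it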
Dec  7 04:42:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255af145d5db39830de595618bcc84cb34393654eb52ce007c67b367939e7653/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255af145d5db39830de595618bcc84cb34393654eb52ce007c67b367939e7653/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255af145d5db39830de595618bcc84cb34393654eb52ce007c67b367939e7653/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:34 np0005549474 podman[90852]: 2025-12-07 09:42:33.994771198 +0000 UTC m=+0.025754971 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:34 np0005549474 podman[90852]: 2025-12-07 09:42:34.096376453 +0000 UTC m=+0.127360206 container init a75b6ec15007c9677d5acf22a295d78c641acbd396c363eae8ea96a7c77f017d (image=quay.io/ceph/ceph:v19, name=fervent_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 04:42:34 np0005549474 podman[90852]: 2025-12-07 09:42:34.101923571 +0000 UTC m=+0.132907364 container start a75b6ec15007c9677d5acf22a295d78c641acbd396c363eae8ea96a7c77f017d (image=quay.io/ceph/ceph:v19, name=fervent_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:34 np0005549474 podman[90852]: 2025-12-07 09:42:34.105578703 +0000 UTC m=+0.136562486 container attach a75b6ec15007c9677d5acf22a295d78c641acbd396c363eae8ea96a7c77f017d (image=quay.io/ceph/ceph:v19, name=fervent_hawking, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14415 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Dec  7 04:42:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:34 np0005549474 fervent_hawking[90867]: Option PROMETHEUS_API_HOST updated
Dec  7 04:42:34 np0005549474 systemd[1]: libpod-a75b6ec15007c9677d5acf22a295d78c641acbd396c363eae8ea96a7c77f017d.scope: Deactivated successfully.
Dec  7 04:42:34 np0005549474 podman[90852]: 2025-12-07 09:42:34.503317663 +0000 UTC m=+0.534301426 container died a75b6ec15007c9677d5acf22a295d78c641acbd396c363eae8ea96a7c77f017d (image=quay.io/ceph/ceph:v19, name=fervent_hawking, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  7 04:42:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-255af145d5db39830de595618bcc84cb34393654eb52ce007c67b367939e7653-merged.mount: Deactivated successfully.
Dec  7 04:42:34 np0005549474 podman[90852]: 2025-12-07 09:42:34.546093958 +0000 UTC m=+0.577077711 container remove a75b6ec15007c9677d5acf22a295d78c641acbd396c363eae8ea96a7c77f017d (image=quay.io/ceph/ceph:v19, name=fervent_hawking, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 04:42:34 np0005549474 systemd[1]: libpod-conmon-a75b6ec15007c9677d5acf22a295d78c641acbd396c363eae8ea96a7c77f017d.scope: Deactivated successfully.
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v5: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
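The pgmap debug line is the periodic cluster-wide placement-group summary: all 69 PGs active+clean, 449 KiB of logical data, and 80 MiB raw used out of 60 GiB across the three OSDs. The same figures are available on demand, as a sketch:

    ceph pg stat   # one-line PG summary matching the pgmap log line
    ceph df        # raw and per-pool capacity breakdown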
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:34 np0005549474 python3[91176]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:34 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:34 np0005549474 podman[91233]: 2025-12-07 09:42:34.865438456 +0000 UTC m=+0.036148336 container create 987bf6563135cdd9cca4584dc76b7c6bed96e2e6e06ebeef4888e12b463e144b (image=quay.io/ceph/ceph:v19, name=determined_chaum, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:34 np0005549474 systemd[1]: Started libpod-conmon-987bf6563135cdd9cca4584dc76b7c6bed96e2e6e06ebeef4888e12b463e144b.scope.
Dec  7 04:42:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01649a81fe3aae348d22940c7847e086a8d1cee02bd770f317ff30f3b40ebbe3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01649a81fe3aae348d22940c7847e086a8d1cee02bd770f317ff30f3b40ebbe3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01649a81fe3aae348d22940c7847e086a8d1cee02bd770f317ff30f3b40ebbe3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:34 np0005549474 podman[91233]: 2025-12-07 09:42:34.92210791 +0000 UTC m=+0.092817800 container init 987bf6563135cdd9cca4584dc76b7c6bed96e2e6e06ebeef4888e12b463e144b (image=quay.io/ceph/ceph:v19, name=determined_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 04:42:34 np0005549474 podman[91233]: 2025-12-07 09:42:34.9280796 +0000 UTC m=+0.098789480 container start 987bf6563135cdd9cca4584dc76b7c6bed96e2e6e06ebeef4888e12b463e144b (image=quay.io/ceph/ceph:v19, name=determined_chaum, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:34 np0005549474 podman[91233]: 2025-12-07 09:42:34.93145406 +0000 UTC m=+0.102163950 container attach 987bf6563135cdd9cca4584dc76b7c6bed96e2e6e06ebeef4888e12b463e144b (image=quay.io/ceph/ceph:v19, name=determined_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 04:42:34 np0005549474 podman[91233]: 2025-12-07 09:42:34.8503941 +0000 UTC m=+0.021104000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:35 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:35 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:35 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:35 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:35 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:35 np0005549474 determined_chaum[91290]: Option GRAFANA_API_URL updated
Dec  7 04:42:35 np0005549474 systemd[1]: libpod-987bf6563135cdd9cca4584dc76b7c6bed96e2e6e06ebeef4888e12b463e144b.scope: Deactivated successfully.
Dec  7 04:42:35 np0005549474 podman[91233]: 2025-12-07 09:42:35.310134311 +0000 UTC m=+0.480844191 container died 987bf6563135cdd9cca4584dc76b7c6bed96e2e6e06ebeef4888e12b463e144b (image=quay.io/ceph/ceph:v19, name=determined_chaum, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:42:35 np0005549474 systemd[1]: var-lib-containers-storage-overlay-01649a81fe3aae348d22940c7847e086a8d1cee02bd770f317ff30f3b40ebbe3-merged.mount: Deactivated successfully.
Dec  7 04:42:35 np0005549474 podman[91233]: 2025-12-07 09:42:35.347724985 +0000 UTC m=+0.518434865 container remove 987bf6563135cdd9cca4584dc76b7c6bed96e2e6e06ebeef4888e12b463e144b (image=quay.io/ceph/ceph:v19, name=determined_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:42:35 np0005549474 systemd[1]: libpod-conmon-987bf6563135cdd9cca4584dc76b7c6bed96e2e6e06ebeef4888e12b463e144b.scope: Deactivated successfully.
Dec  7 04:42:35 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:35 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:35 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.dotugk(active, since 4s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:35 np0005549474 python3[91610]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:35 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:35 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:35 np0005549474 podman[91675]: 2025-12-07 09:42:35.744738755 +0000 UTC m=+0.060996981 container create ed782bbc224a02e9e629c6276598b9af50bc0ea658c41e39807fd51b54008f52 (image=quay.io/ceph/ceph:v19, name=exciting_bell, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  7 04:42:35 np0005549474 systemd[1]: Started libpod-conmon-ed782bbc224a02e9e629c6276598b9af50bc0ea658c41e39807fd51b54008f52.scope.
Dec  7 04:42:35 np0005549474 podman[91675]: 2025-12-07 09:42:35.715634617 +0000 UTC m=+0.031892893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:35 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d9bad8c66006fa3fe573ff78247de2366ae653ccaa77a1bfd39a459b0349c7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d9bad8c66006fa3fe573ff78247de2366ae653ccaa77a1bfd39a459b0349c7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d9bad8c66006fa3fe573ff78247de2366ae653ccaa77a1bfd39a459b0349c7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:35 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:35 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:35 np0005549474 podman[91675]: 2025-12-07 09:42:35.831833663 +0000 UTC m=+0.148091989 container init ed782bbc224a02e9e629c6276598b9af50bc0ea658c41e39807fd51b54008f52 (image=quay.io/ceph/ceph:v19, name=exciting_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:35 np0005549474 podman[91675]: 2025-12-07 09:42:35.839304832 +0000 UTC m=+0.155563058 container start ed782bbc224a02e9e629c6276598b9af50bc0ea658c41e39807fd51b54008f52 (image=quay.io/ceph/ceph:v19, name=exciting_bell, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 04:42:35 np0005549474 podman[91675]: 2025-12-07 09:42:35.842866278 +0000 UTC m=+0.159124504 container attach ed782bbc224a02e9e629c6276598b9af50bc0ea658c41e39807fd51b54008f52 (image=quay.io/ceph/ceph:v19, name=exciting_bell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2509102800' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2509102800' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2509102800' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  1: '-n'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  2: 'mgr.compute-0.dotugk'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  3: '-f'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  4: '--setuser'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  5: 'ceph'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  6: '--setgroup'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  7: 'ceph'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  8: '--default-log-to-file=false'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  9: '--default-log-to-journald=true'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr respawn  10: '--default-log-to-stderr=false'
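Disabling the dashboard module changed the set of enabled modules in the mgrmap, and an active mgr that sees that set change re-executes itself with its original argv; the "mgr respawn" lines are that argv dumped one element per line before the exec. The disable is followed moments later by an enable (the ansible task at 04:42:36), so the mgr bounces once per change. By hand the same cycle would be, as a sketch:

    ceph mgr module disable dashboard
    ceph mgr module enable dashboard
    ceph mgr module ls    # confirm 'dashboard' is listed as enabled again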
Dec  7 04:42:36 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.dotugk(active, since 6s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:36 np0005549474 systemd[1]: libpod-ed782bbc224a02e9e629c6276598b9af50bc0ea658c41e39807fd51b54008f52.scope: Deactivated successfully.
Dec  7 04:42:36 np0005549474 podman[91675]: 2025-12-07 09:42:36.560080036 +0000 UTC m=+0.876338262 container died ed782bbc224a02e9e629c6276598b9af50bc0ea658c41e39807fd51b54008f52 (image=quay.io/ceph/ceph:v19, name=exciting_bell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 04:42:36 np0005549474 systemd[1]: var-lib-containers-storage-overlay-83d9bad8c66006fa3fe573ff78247de2366ae653ccaa77a1bfd39a459b0349c7-merged.mount: Deactivated successfully.
Dec  7 04:42:36 np0005549474 podman[91675]: 2025-12-07 09:42:36.593345905 +0000 UTC m=+0.909604131 container remove ed782bbc224a02e9e629c6276598b9af50bc0ea658c41e39807fd51b54008f52 (image=quay.io/ceph/ceph:v19, name=exciting_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 04:42:36 np0005549474 systemd[1]: libpod-conmon-ed782bbc224a02e9e629c6276598b9af50bc0ea658c41e39807fd51b54008f52.scope: Deactivated successfully.
Dec  7 04:42:36 np0005549474 systemd[1]: session-34.scope: Deactivated successfully.
Dec  7 04:42:36 np0005549474 systemd[1]: session-34.scope: Consumed 4.393s CPU time.
Dec  7 04:42:36 np0005549474 systemd-logind[796]: Session 34 logged out. Waiting for processes to exit.
Dec  7 04:42:36 np0005549474 systemd-logind[796]: Removed session 34.
Dec  7 04:42:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ignoring --setuser ceph since I am not root
Dec  7 04:42:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ignoring --setgroup ceph since I am not root
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: pidfile_write: ignore empty --pid-file
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'alerts'
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'balancer'
Dec  7 04:42:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:36.792+0000 7fc9535b0140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
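The "missing NOTIFY_TYPES member" warnings are emitted once per Python module whose class does not declare which notification types it consumes; in this release they are informational and the modules load anyway, as the subsequent "Loading python module" lines show. The modules live under /usr/share/ceph/mgr/<name>/module.py inside the image; a sketch for listing which ones do declare the member:

    # List in-tree mgr modules in this image that declare NOTIFY_TYPES
    # (path is the stock mgr module location in the ceph container)
    podman run --rm --entrypoint bash quay.io/ceph/ceph:v19 \
      -c "grep -rl NOTIFY_TYPES /usr/share/ceph/mgr/ | sort"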
Dec  7 04:42:36 np0005549474 python3[91995]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 04:42:36 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'cephadm'
Dec  7 04:42:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:36.867+0000 7fc9535b0140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 04:42:36 np0005549474 podman[91996]: 2025-12-07 09:42:36.92284824 +0000 UTC m=+0.043333599 container create 31b282843427fe960d701a3f0e16c1bd3dc9509ead676e42f6deb5616818d5fd (image=quay.io/ceph/ceph:v19, name=gracious_kowalevski, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 04:42:36 np0005549474 systemd[1]: Started libpod-conmon-31b282843427fe960d701a3f0e16c1bd3dc9509ead676e42f6deb5616818d5fd.scope.
Dec  7 04:42:36 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:36 np0005549474 podman[91996]: 2025-12-07 09:42:36.90110743 +0000 UTC m=+0.021592779 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b65cf1f4656822e5af11c98f3c2fc5905eef58c4b4dfe9f6e3103a4b23f00f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b65cf1f4656822e5af11c98f3c2fc5905eef58c4b4dfe9f6e3103a4b23f00f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b65cf1f4656822e5af11c98f3c2fc5905eef58c4b4dfe9f6e3103a4b23f00f3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:37 np0005549474 podman[91996]: 2025-12-07 09:42:37.014192072 +0000 UTC m=+0.134677411 container init 31b282843427fe960d701a3f0e16c1bd3dc9509ead676e42f6deb5616818d5fd (image=quay.io/ceph/ceph:v19, name=gracious_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:42:37 np0005549474 podman[91996]: 2025-12-07 09:42:37.019390321 +0000 UTC m=+0.139875660 container start 31b282843427fe960d701a3f0e16c1bd3dc9509ead676e42f6deb5616818d5fd (image=quay.io/ceph/ceph:v19, name=gracious_kowalevski, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 04:42:37 np0005549474 podman[91996]: 2025-12-07 09:42:37.022847713 +0000 UTC m=+0.143333032 container attach 31b282843427fe960d701a3f0e16c1bd3dc9509ead676e42f6deb5616818d5fd (image=quay.io/ceph/ceph:v19, name=gracious_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/2509102800' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec  7 04:42:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3673904121' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  7 04:42:37 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'crash'
Dec  7 04:42:37 np0005549474 ceph-mgr[74811]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:42:37 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'dashboard'
Dec  7 04:42:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:37.695+0000 7fc9535b0140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:42:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'devicehealth'
Dec  7 04:42:38 np0005549474 ceph-mgr[74811]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:42:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 04:42:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:38.324+0000 7fc9535b0140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:42:38 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3673904121' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  7 04:42:38 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3673904121' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  7 04:42:38 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.dotugk(active, since 7s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:38 np0005549474 systemd[1]: libpod-31b282843427fe960d701a3f0e16c1bd3dc9509ead676e42f6deb5616818d5fd.scope: Deactivated successfully.
Dec  7 04:42:38 np0005549474 podman[91996]: 2025-12-07 09:42:38.435914517 +0000 UTC m=+1.556399846 container died 31b282843427fe960d701a3f0e16c1bd3dc9509ead676e42f6deb5616818d5fd (image=quay.io/ceph/ceph:v19, name=gracious_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:42:38 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3b65cf1f4656822e5af11c98f3c2fc5905eef58c4b4dfe9f6e3103a4b23f00f3-merged.mount: Deactivated successfully.
Dec  7 04:42:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 04:42:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 04:42:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  from numpy import show_config as show_numpy_config
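The scipy UserWarning above fires while the diskprediction_local module imports scipy inside one of the mgr's Python sub-interpreters; it is benign noise rather than a failure. A hedged sketch of how such a warning can be filtered with the standard-library warnings machinery (nothing here is Ceph-specific):

    import warnings

    # match the start of the message logged above; UserWarning is the
    # category scipy uses for it
    warnings.filterwarnings(
        "ignore",
        message="NumPy was imported from a Python sub-interpreter",
        category=UserWarning,
    )

    try:
        import scipy  # noqa: F401  -- triggers the warning on first import, if installed
    except ImportError:
        pass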
Dec  7 04:42:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:38.488+0000 7fc9535b0140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:42:38 np0005549474 ceph-mgr[74811]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:42:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'influx'
Dec  7 04:42:38 np0005549474 podman[91996]: 2025-12-07 09:42:38.491291257 +0000 UTC m=+1.611776576 container remove 31b282843427fe960d701a3f0e16c1bd3dc9509ead676e42f6deb5616818d5fd (image=quay.io/ceph/ceph:v19, name=gracious_kowalevski, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:38 np0005549474 systemd[1]: libpod-conmon-31b282843427fe960d701a3f0e16c1bd3dc9509ead676e42f6deb5616818d5fd.scope: Deactivated successfully.
Dec  7 04:42:38 np0005549474 ceph-mgr[74811]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:42:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:38.562+0000 7fc9535b0140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:42:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'insights'
Dec  7 04:42:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'iostat'
Dec  7 04:42:38 np0005549474 ceph-mgr[74811]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:42:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'k8sevents'
Dec  7 04:42:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:38.705+0000 7fc9535b0140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:42:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'localpool'
Dec  7 04:42:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 04:42:39 np0005549474 python3[92136]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:42:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mirroring'
Dec  7 04:42:39 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3673904121' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  7 04:42:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'nfs'
Dec  7 04:42:39 np0005549474 python3[92207]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765100559.0283058-37342-125667828521228/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:42:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:39.672+0000 7fc9535b0140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:42:39 np0005549474 ceph-mgr[74811]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:42:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'orchestrator'
Dec  7 04:42:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:39.886+0000 7fc9535b0140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:39 np0005549474 ceph-mgr[74811]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 04:42:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:39.962+0000 7fc9535b0140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:42:39 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:42:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_support'
Dec  7 04:42:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:40.032+0000 7fc9535b0140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:42:40 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:42:40 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 04:42:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:40.113+0000 7fc9535b0140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:42:40 np0005549474 ceph-mgr[74811]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:42:40 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'progress'
Dec  7 04:42:40 np0005549474 python3[92257]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
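The Ansible task above runs a one-shot ceph CLI inside the v19 container to create the CephFS volume; the trailing '#012' is syslog's octal escape for the newline at the end of the raw params. A reconstruction of the same invocation as a subprocess call, with every value taken from the log line itself:

    import subprocess

    subprocess.run(
        [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
            "--volume", "/tmp/ceph_mds.yml:/home/ceph_spec.yaml:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
            "--fsid", "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "fs", "volume", "create", "cephfs",
            "--placement=compute-0 compute-1 compute-2",
        ],
        check=True,
    )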
Dec  7 04:42:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:40.183+0000 7fc9535b0140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:42:40 np0005549474 ceph-mgr[74811]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:42:40 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'prometheus'
Dec  7 04:42:40 np0005549474 podman[92258]: 2025-12-07 09:42:40.216246407 +0000 UTC m=+0.039737642 container create 049a4071d799494bb9ff00d93c0b00eab982f224a9367e5cca7a02ed069a08c4 (image=quay.io/ceph/ceph:v19, name=mystifying_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:40 np0005549474 systemd[1]: Started libpod-conmon-049a4071d799494bb9ff00d93c0b00eab982f224a9367e5cca7a02ed069a08c4.scope.
Dec  7 04:42:40 np0005549474 podman[92258]: 2025-12-07 09:42:40.197904997 +0000 UTC m=+0.021396282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:40 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbf96399e133f893c95fb98d534bb3cac80d4b6de49dc76ead2947bbe25562d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbf96399e133f893c95fb98d534bb3cac80d4b6de49dc76ead2947bbe25562d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbf96399e133f893c95fb98d534bb3cac80d4b6de49dc76ead2947bbe25562d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:40 np0005549474 podman[92258]: 2025-12-07 09:42:40.326564895 +0000 UTC m=+0.150056220 container init 049a4071d799494bb9ff00d93c0b00eab982f224a9367e5cca7a02ed069a08c4 (image=quay.io/ceph/ceph:v19, name=mystifying_lamport, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:40 np0005549474 podman[92258]: 2025-12-07 09:42:40.338372572 +0000 UTC m=+0.161863847 container start 049a4071d799494bb9ff00d93c0b00eab982f224a9367e5cca7a02ed069a08c4 (image=quay.io/ceph/ceph:v19, name=mystifying_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:40 np0005549474 podman[92258]: 2025-12-07 09:42:40.342560593 +0000 UTC m=+0.166051918 container attach 049a4071d799494bb9ff00d93c0b00eab982f224a9367e5cca7a02ed069a08c4 (image=quay.io/ceph/ceph:v19, name=mystifying_lamport, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:40.520+0000 7fc9535b0140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:42:40 np0005549474 ceph-mgr[74811]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:42:40 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rbd_support'
Dec  7 04:42:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:40.612+0000 7fc9535b0140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:42:40 np0005549474 ceph-mgr[74811]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:42:40 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'restful'
Dec  7 04:42:40 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rgw'
Dec  7 04:42:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:41.025+0000 7fc9535b0140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rook'
Dec  7 04:42:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:41.592+0000 7fc9535b0140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'selftest'
Dec  7 04:42:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:41.664+0000 7fc9535b0140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'snap_schedule'
Dec  7 04:42:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:41.744+0000 7fc9535b0140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'stats'
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'status'
Dec  7 04:42:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:41.893+0000 7fc9535b0140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telegraf'
Dec  7 04:42:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:41.960+0000 7fc9535b0140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:42:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telemetry'
Dec  7 04:42:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:42.124+0000 7fc9535b0140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 04:42:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:42.338+0000 7fc9535b0140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'volumes'
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ntknug restarted
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ntknug started
Dec  7 04:42:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:42.600+0000 7fc9535b0140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'zabbix'
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.dotugk(active, since 12s), standbys: compute-1.buauyv, compute-2.ntknug
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.buauyv restarted
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.buauyv started
Dec  7 04:42:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:42.673+0000 7fc9535b0140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dotugk restarted
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dotugk
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: ms_deliver_dispatch: unhandled message 0x55ec6e441860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  1: '-n'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  2: 'mgr.compute-0.dotugk'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  3: '-f'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  4: '--setuser'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  5: 'ceph'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  6: '--setgroup'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  7: 'ceph'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  8: '--default-log-to-file=false'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  9: '--default-log-to-journald=true'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr respawn  exe_path /proc/self/exe
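handle_mgr_map noticed that the set of enabled modules changed (the dashboard toggle above) and respawned the daemon: it re-executes its own binary through /proc/self/exe with the original argv, so the PID and the journald unit stay the same. A minimal sketch of that mechanism (illustrative, not the actual ceph-mgr source):

    import os

    # argv exactly as dumped in the respawn lines above
    argv = [
        "/usr/bin/ceph-mgr",
        "-n", "mgr.compute-0.dotugk",
        "-f",
        "--setuser", "ceph",
        "--setgroup", "ceph",
        "--default-log-to-file=false",
        "--default-log-to-journald=true",
        "--default-log-to-stderr=false",
    ]

    # execv replaces the current process image in place; argv[0] keeps the
    # advertised executable name while /proc/self/exe resolves to the
    # currently running binary
    os.execv("/proc/self/exe", argv)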
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec  7 04:42:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.dotugk(active, starting, since 0.0286199s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ignoring --setuser ceph since I am not root
Dec  7 04:42:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ignoring --setgroup ceph since I am not root
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: pidfile_write: ignore empty --pid-file
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'alerts'
Dec  7 04:42:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:42.918+0000 7fae15205140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'balancer'
Dec  7 04:42:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:42.998+0000 7fae15205140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 04:42:42 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'cephadm'
Dec  7 04:42:43 np0005549474 ceph-mon[74516]: Active manager daemon compute-0.dotugk restarted
Dec  7 04:42:43 np0005549474 ceph-mon[74516]: Activating manager daemon compute-0.dotugk
Dec  7 04:42:43 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'crash'
Dec  7 04:42:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:43.785+0000 7fae15205140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:42:43 np0005549474 ceph-mgr[74811]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:42:43 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'dashboard'
Dec  7 04:42:44 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'devicehealth'
Dec  7 04:42:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:44.426+0000 7fae15205140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:42:44 np0005549474 ceph-mgr[74811]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:42:44 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 04:42:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 04:42:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 04:42:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  from numpy import show_config as show_numpy_config
Dec  7 04:42:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:44.590+0000 7fae15205140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:42:44 np0005549474 ceph-mgr[74811]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:42:44 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'influx'
Dec  7 04:42:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:44.660+0000 7fae15205140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:42:44 np0005549474 ceph-mgr[74811]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:42:44 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'insights'
Dec  7 04:42:44 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'iostat'
Dec  7 04:42:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:44.793+0000 7fae15205140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:42:44 np0005549474 ceph-mgr[74811]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:42:44 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'k8sevents'
Dec  7 04:42:45 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'localpool'
Dec  7 04:42:45 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 04:42:45 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mirroring'
Dec  7 04:42:45 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'nfs'
Dec  7 04:42:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:45.768+0000 7fae15205140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:42:45 np0005549474 ceph-mgr[74811]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:42:45 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'orchestrator'
Dec  7 04:42:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:45.994+0000 7fae15205140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:45 np0005549474 ceph-mgr[74811]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:45 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 04:42:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:46.067+0000 7fae15205140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_support'
Dec  7 04:42:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:46.139+0000 7fae15205140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 04:42:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:46.216+0000 7fae15205140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'progress'
Dec  7 04:42:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:46.288+0000 7fae15205140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'prometheus'
Dec  7 04:42:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:46.613+0000 7fae15205140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rbd_support'
Dec  7 04:42:46 np0005549474 systemd[1]: Stopping User Manager for UID 42477...
Dec  7 04:42:46 np0005549474 systemd[75863]: Activating special unit Exit the Session...
Dec  7 04:42:46 np0005549474 systemd[75863]: Stopped target Main User Target.
Dec  7 04:42:46 np0005549474 systemd[75863]: Stopped target Basic System.
Dec  7 04:42:46 np0005549474 systemd[75863]: Stopped target Paths.
Dec  7 04:42:46 np0005549474 systemd[75863]: Stopped target Sockets.
Dec  7 04:42:46 np0005549474 systemd[75863]: Stopped target Timers.
Dec  7 04:42:46 np0005549474 systemd[75863]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  7 04:42:46 np0005549474 systemd[75863]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  7 04:42:46 np0005549474 systemd[75863]: Closed D-Bus User Message Bus Socket.
Dec  7 04:42:46 np0005549474 systemd[75863]: Stopped Create User's Volatile Files and Directories.
Dec  7 04:42:46 np0005549474 systemd[75863]: Removed slice User Application Slice.
Dec  7 04:42:46 np0005549474 systemd[75863]: Reached target Shutdown.
Dec  7 04:42:46 np0005549474 systemd[75863]: Finished Exit the Session.
Dec  7 04:42:46 np0005549474 systemd[75863]: Reached target Exit the Session.
Dec  7 04:42:46 np0005549474 systemd[1]: user@42477.service: Deactivated successfully.
Dec  7 04:42:46 np0005549474 systemd[1]: Stopped User Manager for UID 42477.
Dec  7 04:42:46 np0005549474 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  7 04:42:46 np0005549474 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  7 04:42:46 np0005549474 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  7 04:42:46 np0005549474 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  7 04:42:46 np0005549474 systemd[1]: Removed slice User Slice of UID 42477.
Dec  7 04:42:46 np0005549474 systemd[1]: user-42477.slice: Consumed 34.316s CPU time.
Dec  7 04:42:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:46.722+0000 7fae15205140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'restful'
Dec  7 04:42:46 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rgw'
Dec  7 04:42:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:47.153+0000 7fae15205140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:42:47 np0005549474 ceph-mgr[74811]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:42:47 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rook'
Dec  7 04:42:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:42:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:47.737+0000 7fae15205140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:42:47 np0005549474 ceph-mgr[74811]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:42:47 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'selftest'
Dec  7 04:42:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:47.813+0000 7fae15205140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:42:47 np0005549474 ceph-mgr[74811]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:42:47 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'snap_schedule'
Dec  7 04:42:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:47.894+0000 7fae15205140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:42:47 np0005549474 ceph-mgr[74811]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:42:47 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'stats'
Dec  7 04:42:47 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'status'
Dec  7 04:42:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:48.039+0000 7fae15205140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telegraf'
Dec  7 04:42:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:48.108+0000 7fae15205140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telemetry'
Dec  7 04:42:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:48.262+0000 7fae15205140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 04:42:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:48.469+0000 7fae15205140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'volumes'
Dec  7 04:42:48 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ntknug restarted
Dec  7 04:42:48 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ntknug started
Dec  7 04:42:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:48.718+0000 7fae15205140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'zabbix'
Dec  7 04:42:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:42:48.786+0000 7fae15205140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 04:42:48 np0005549474 ceph-mgr[74811]: ms_deliver_dispatch: unhandled message 0x55e8ee0d9860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.buauyv restarted
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.buauyv started
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dotugk restarted
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dotugk
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.dotugk(active, starting, since 6s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map Activating!
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map I am now activating
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.dotugk(active, starting, since 0.0970144s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e1 all = 1
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: balancer
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Manager daemon compute-0.dotugk is now available
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [balancer INFO root] Starting
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:42:49
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
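Right after activation the fresh mgr has no PG stats yet, so the balancer sees every PG (fraction 1.000000) as unknown and defers its upmap plan. A hedged sketch of the guard that message implies (simplified; the real logic lives in the balancer mgr module):

    def pgs_ready(num_unknown: int, num_pgs: int) -> bool:
        # refuse to optimize while any PGs are still in the 'unknown' state,
        # as in "Some PGs (1.000000) are unknown; try again later"
        unknown_ratio = num_unknown / max(num_pgs, 1)
        if unknown_ratio > 0.0:
            print(f"Some PGs ({unknown_ratio:f}) are unknown; try again later")
            return False
        return True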
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: cephadm
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: crash
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: dashboard
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO sso] Loading SSO DB version=1
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: devicehealth
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: iostat
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Starting
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: nfs
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: orchestrator
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: pg_autoscaler
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: progress
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [progress INFO root] Loading...
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fad9454fac0>, <progress.module.GhostEvent object at 0x7fad9454fa00>, <progress.module.GhostEvent object at 0x7fad9454fa60>, <progress.module.GhostEvent object at 0x7fad94567340>, <progress.module.GhostEvent object at 0x7fad94567370>, <progress.module.GhostEvent object at 0x7fad945673a0>, <progress.module.GhostEvent object at 0x7fad945673d0>, <progress.module.GhostEvent object at 0x7fad94567400>, <progress.module.GhostEvent object at 0x7fad94567430>] historic events
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [progress INFO root] Loaded OSDMap, ready.
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] recovery thread starting
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] starting setup
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: rbd_support
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: restful
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [restful INFO root] server_addr: :: server_port: 8003
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: status
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [restful WARNING root] server not running: no certificate configured
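The restful module loads but refuses to serve until TLS material exists; the documented remedy is to generate a self-signed certificate for it. A hedged one-liner via subprocess ('ceph restful create-self-signed-cert' is the command from the module's documentation, run here against this cluster's admin keyring):

    import subprocess

    # generates a self-signed certificate and stores it in the mgr config store
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)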
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: telemetry
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] PerfHandler: starting
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: volumes
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TaskHandler: starting
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"} v 0)
Dec  7 04:42:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] setup complete
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  7 04:42:49 np0005549474 systemd[1]: Created slice User Slice of UID 42477.
Dec  7 04:42:49 np0005549474 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  7 04:42:49 np0005549474 systemd-logind[796]: New session 35 of user ceph-admin.
Dec  7 04:42:49 np0005549474 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  7 04:42:49 np0005549474 systemd[1]: Starting User Manager for UID 42477...
Dec  7 04:42:49 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.module] Engine started.
Dec  7 04:42:50 np0005549474 systemd[92462]: Queued start job for default target Main User Target.
Dec  7 04:42:50 np0005549474 systemd[92462]: Created slice User Application Slice.
Dec  7 04:42:50 np0005549474 systemd[92462]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  7 04:42:50 np0005549474 systemd[92462]: Started Daily Cleanup of User's Temporary Directories.
Dec  7 04:42:50 np0005549474 systemd[92462]: Reached target Paths.
Dec  7 04:42:50 np0005549474 systemd[92462]: Reached target Timers.
Dec  7 04:42:50 np0005549474 systemd[92462]: Starting D-Bus User Message Bus Socket...
Dec  7 04:42:50 np0005549474 systemd[92462]: Starting Create User's Volatile Files and Directories...
Dec  7 04:42:50 np0005549474 systemd[92462]: Listening on D-Bus User Message Bus Socket.
Dec  7 04:42:50 np0005549474 systemd[92462]: Reached target Sockets.
Dec  7 04:42:50 np0005549474 systemd[92462]: Finished Create User's Volatile Files and Directories.
Dec  7 04:42:50 np0005549474 systemd[92462]: Reached target Basic System.
Dec  7 04:42:50 np0005549474 systemd[92462]: Reached target Main User Target.
Dec  7 04:42:50 np0005549474 systemd[92462]: Startup finished in 109ms.
Dec  7 04:42:50 np0005549474 systemd[1]: Started User Manager for UID 42477.
Dec  7 04:42:50 np0005549474 systemd[1]: Started Session 35 of User ceph-admin.
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:42:50] ENGINE Bus STARTING
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:42:50] ENGINE Bus STARTING
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: Active manager daemon compute-0.dotugk restarted
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: Activating manager daemon compute-0.dotugk
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: Manager daemon compute-0.dotugk is now available
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec  7 04:42:50 np0005549474 podman[92601]: 2025-12-07 09:42:50.831005191 +0000 UTC m=+0.254703808 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.dotugk(active, since 1.50064s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14448 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  7 04:42:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0[74512]: 2025-12-07T09:42:50.842+0000 7ff1f2863640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v3: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e2 new map
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e2 print_map
    e2
    btime 2025-12-07T09:42:50.843512+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	2
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-12-07T09:42:50.843467+0000
    modified	2025-12-07T09:42:50.843467+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in
    up	{}
    failed
    damaged
    stopped
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer
    bal_rank_mask	-1
    standby_count_wanted	0
    qdb_cluster	leader: 0 members:
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:42:50] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:42:50] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:42:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:42:50] ENGINE Client ('192.168.122.100', 44880) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:42:50] ENGINE Client ('192.168.122.100', 44880) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  7 04:42:50 np0005549474 systemd[1]: libpod-049a4071d799494bb9ff00d93c0b00eab982f224a9367e5cca7a02ed069a08c4.scope: Deactivated successfully.
Dec  7 04:42:50 np0005549474 podman[92258]: 2025-12-07 09:42:50.888043365 +0000 UTC m=+10.711534640 container died 049a4071d799494bb9ff00d93c0b00eab982f224a9367e5cca7a02ed069a08c4 (image=quay.io/ceph/ceph:v19, name=mystifying_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:42:50 np0005549474 systemd[1]: var-lib-containers-storage-overlay-9fbf96399e133f893c95fb98d534bb3cac80d4b6de49dc76ead2947bbe25562d-merged.mount: Deactivated successfully.
Dec  7 04:42:50 np0005549474 podman[92601]: 2025-12-07 09:42:50.927088859 +0000 UTC m=+0.350787476 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 04:42:50 np0005549474 podman[92258]: 2025-12-07 09:42:50.936519171 +0000 UTC m=+10.760010406 container remove 049a4071d799494bb9ff00d93c0b00eab982f224a9367e5cca7a02ed069a08c4 (image=quay.io/ceph/ceph:v19, name=mystifying_lamport, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  7 04:42:50 np0005549474 systemd[1]: libpod-conmon-049a4071d799494bb9ff00d93c0b00eab982f224a9367e5cca7a02ed069a08c4.scope: Deactivated successfully.
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:42:50] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:42:50] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:42:50] ENGINE Bus STARTED
Dec  7 04:42:50 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:42:50] ENGINE Bus STARTED
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:42:51 np0005549474 python3[92728]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 podman[92750]: 2025-12-07 09:42:51.296296296 +0000 UTC m=+0.047313206 container create 06e37866a95b8502c92a3ba38097d66ab53f69a2a24ecaecf45c0dcf52429f6c (image=quay.io/ceph/ceph:v19, name=optimistic_lewin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 04:42:51 np0005549474 systemd[1]: Started libpod-conmon-06e37866a95b8502c92a3ba38097d66ab53f69a2a24ecaecf45c0dcf52429f6c.scope.
Dec  7 04:42:51 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a24cf028ee8e88df72b62c30a6f7d656e3a8db88902196ac1fb96a66dfafb6ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a24cf028ee8e88df72b62c30a6f7d656e3a8db88902196ac1fb96a66dfafb6ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a24cf028ee8e88df72b62c30a6f7d656e3a8db88902196ac1fb96a66dfafb6ed/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:51 np0005549474 podman[92750]: 2025-12-07 09:42:51.275715486 +0000 UTC m=+0.026732426 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:51 np0005549474 podman[92750]: 2025-12-07 09:42:51.374063584 +0000 UTC m=+0.125080504 container init 06e37866a95b8502c92a3ba38097d66ab53f69a2a24ecaecf45c0dcf52429f6c (image=quay.io/ceph/ceph:v19, name=optimistic_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:42:51 np0005549474 podman[92750]: 2025-12-07 09:42:51.380470455 +0000 UTC m=+0.131487355 container start 06e37866a95b8502c92a3ba38097d66ab53f69a2a24ecaecf45c0dcf52429f6c (image=quay.io/ceph/ceph:v19, name=optimistic_lewin, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:51 np0005549474 podman[92750]: 2025-12-07 09:42:51.383961848 +0000 UTC m=+0.134978748 container attach 06e37866a95b8502c92a3ba38097d66ab53f69a2a24ecaecf45c0dcf52429f6c (image=quay.io/ceph/ceph:v19, name=optimistic_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v5: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:51 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Check health
Dec  7 04:42:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14481 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:42:51 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:51 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 optimistic_lewin[92810]: Scheduled mds.cephfs update...
Dec  7 04:42:51 np0005549474 systemd[1]: libpod-06e37866a95b8502c92a3ba38097d66ab53f69a2a24ecaecf45c0dcf52429f6c.scope: Deactivated successfully.
Dec  7 04:42:51 np0005549474 podman[92750]: 2025-12-07 09:42:51.744694329 +0000 UTC m=+0.495711229 container died 06e37866a95b8502c92a3ba38097d66ab53f69a2a24ecaecf45c0dcf52429f6c (image=quay.io/ceph/ceph:v19, name=optimistic_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:51 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a24cf028ee8e88df72b62c30a6f7d656e3a8db88902196ac1fb96a66dfafb6ed-merged.mount: Deactivated successfully.
Dec  7 04:42:51 np0005549474 podman[92750]: 2025-12-07 09:42:51.782378157 +0000 UTC m=+0.533395067 container remove 06e37866a95b8502c92a3ba38097d66ab53f69a2a24ecaecf45c0dcf52429f6c (image=quay.io/ceph/ceph:v19, name=optimistic_lewin, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:51 np0005549474 systemd[1]: libpod-conmon-06e37866a95b8502c92a3ba38097d66ab53f69a2a24ecaecf45c0dcf52429f6c.scope: Deactivated successfully.
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:42:50] ENGINE Bus STARTING
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:51 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.dotugk(active, since 2s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:52 np0005549474 python3[92942]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '
     _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:52 np0005549474 podman[92969]: 2025-12-07 09:42:52.122619919 +0000 UTC m=+0.038604122 container create 75abad1ff8f930fc7c66c8f80cb85ec8663630d32204d19b3b8e9a20410bf2bc (image=quay.io/ceph/ceph:v19, name=vibrant_shockley, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:42:52 np0005549474 systemd[1]: Started libpod-conmon-75abad1ff8f930fc7c66c8f80cb85ec8663630d32204d19b3b8e9a20410bf2bc.scope.
Dec  7 04:42:52 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d5dda078af981e5a24b724384de5259f207af32b770a8842ea90bc4440997c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d5dda078af981e5a24b724384de5259f207af32b770a8842ea90bc4440997c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d5dda078af981e5a24b724384de5259f207af32b770a8842ea90bc4440997c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:52 np0005549474 podman[92969]: 2025-12-07 09:42:52.176444778 +0000 UTC m=+0.092429001 container init 75abad1ff8f930fc7c66c8f80cb85ec8663630d32204d19b3b8e9a20410bf2bc (image=quay.io/ceph/ceph:v19, name=vibrant_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 04:42:52 np0005549474 podman[92969]: 2025-12-07 09:42:52.185776398 +0000 UTC m=+0.101760611 container start 75abad1ff8f930fc7c66c8f80cb85ec8663630d32204d19b3b8e9a20410bf2bc (image=quay.io/ceph/ceph:v19, name=vibrant_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 04:42:52 np0005549474 podman[92969]: 2025-12-07 09:42:52.189180799 +0000 UTC m=+0.105165012 container attach 75abad1ff8f930fc7c66c8f80cb85ec8663630d32204d19b3b8e9a20410bf2bc (image=quay.io/ceph/ceph:v19, name=vibrant_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 04:42:52 np0005549474 podman[92969]: 2025-12-07 09:42:52.106687373 +0000 UTC m=+0.022671576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14493 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:42:52 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:42:50] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:42:50] ENGINE Client ('192.168.122.100', 44880) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:42:50] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:42:50] ENGINE Bus STARTED
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:42:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec  7 04:42:53 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v7: 70 pgs: 1 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:53 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 37 pg[8.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.dotugk(active, since 4s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Unable to set osd_memory_target on compute-1 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:42:53 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:53 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:53 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:53 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
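The audit trail above shows the two mon commands behind the new NFS pool: "osd pool create" for .nfs (dot-prefixed pool names are reserved, hence yes_i_really_mean_it) and "osd pool application enable" to tag it, which is what clears the POOL_APP_NOT_ENABLED warning raised just above (gone by 04:42:56). Issued by hand, the sequence from the cmd= payloads is:

    # Create the reserved-name pool used by the mgr nfs module
    ceph osd pool create .nfs --yes-i-really-mean-it
    # Tag it so the pool-application health check stays quiet
    ceph osd pool application enable .nfs nfs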
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec  7 04:42:54 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 38 pg[8.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
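cephadm is persisting a matched pair of specs here, nfs.cephfs plus ingress.nfs.cephfs, both placed on all three compute hosts; that pairing is what the mgr produces for an NFS cluster fronted by an ingress (haproxy/keepalived) layer. A hedged reconstruction of the request that led to it (the virtual IP is not visible in this log, so the value below is a placeholder, not taken from it):

    # Recreate the same nfs + ingress spec pair by hand; the VIP is illustrative
    ceph nfs cluster create cephfs "compute-0,compute-1,compute-2" \
        --ingress --virtual_ip 192.168.122.200/24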
Dec  7 04:42:54 np0005549474 systemd[1]: libpod-75abad1ff8f930fc7c66c8f80cb85ec8663630d32204d19b3b8e9a20410bf2bc.scope: Deactivated successfully.
Dec  7 04:42:54 np0005549474 podman[92969]: 2025-12-07 09:42:54.450951974 +0000 UTC m=+2.366936227 container died 75abad1ff8f930fc7c66c8f80cb85ec8663630d32204d19b3b8e9a20410bf2bc (image=quay.io/ceph/ceph:v19, name=vibrant_shockley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:54 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e6d5dda078af981e5a24b724384de5259f207af32b770a8842ea90bc4440997c-merged.mount: Deactivated successfully.
Dec  7 04:42:54 np0005549474 podman[92969]: 2025-12-07 09:42:54.498391242 +0000 UTC m=+2.414375445 container remove 75abad1ff8f930fc7c66c8f80cb85ec8663630d32204d19b3b8e9a20410bf2bc (image=quay.io/ceph/ceph:v19, name=vibrant_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:54 np0005549474 systemd[1]: libpod-conmon-75abad1ff8f930fc7c66c8f80cb85ec8663630d32204d19b3b8e9a20410bf2bc.scope: Deactivated successfully.
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:54 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:55 np0005549474 python3[93901]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v10: 70 pgs: 1 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.dotugk(active, since 6s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:42:55 np0005549474 python3[94093]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765100574.935837-37373-253274609894136/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=2eec074211d5644630d1561f0b2053eaf094bdc2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
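The stat/copy pair above is Ansible's usual idempotent file deployment: checksum the destination, then render ceph_key.j2 over it as uid/gid 167 (the ceph user inside the containers) with mode 0644. Stripped of Ansible, the install step amounts to the following, with destination, ownership and mode taken from the log and rendered.keyring standing in for the templated source file:

    # One-shot equivalent of the ansible.legacy.copy task above
    install -o 167 -g 167 -m 0644 rendered.keyring /etc/ceph/ceph.client.openstack.keyring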
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:42:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:55 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev bba86180-fcce-438d-b86f-c0d29b22ebbc (Updating node-exporter deployment (+3 -> 3))
Dec  7 04:42:55 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Dec  7 04:42:55 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Dec  7 04:42:56 np0005549474 python3[94193]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:56 np0005549474 podman[94211]: 2025-12-07 09:42:56.077443943 +0000 UTC m=+0.035471369 container create 0fd1e7ea73fe0696f99d11aa77dc29c5cf4365f6a027bd4bc61a49bdbf69488f (image=quay.io/ceph/ceph:v19, name=pedantic_cray, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:42:56 np0005549474 systemd[1]: Started libpod-conmon-0fd1e7ea73fe0696f99d11aa77dc29c5cf4365f6a027bd4bc61a49bdbf69488f.scope.
Dec  7 04:42:56 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6748e374ae9966892e7d76479b944d6ad3a76532e473db657b0c12bfc99a260/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6748e374ae9966892e7d76479b944d6ad3a76532e473db657b0c12bfc99a260/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:56 np0005549474 podman[94211]: 2025-12-07 09:42:56.145073431 +0000 UTC m=+0.103100887 container init 0fd1e7ea73fe0696f99d11aa77dc29c5cf4365f6a027bd4bc61a49bdbf69488f (image=quay.io/ceph/ceph:v19, name=pedantic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:56 np0005549474 podman[94211]: 2025-12-07 09:42:56.150411533 +0000 UTC m=+0.108438969 container start 0fd1e7ea73fe0696f99d11aa77dc29c5cf4365f6a027bd4bc61a49bdbf69488f (image=quay.io/ceph/ceph:v19, name=pedantic_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:56 np0005549474 podman[94211]: 2025-12-07 09:42:56.153915367 +0000 UTC m=+0.111942803 container attach 0fd1e7ea73fe0696f99d11aa77dc29c5cf4365f6a027bd4bc61a49bdbf69488f (image=quay.io/ceph/ceph:v19, name=pedantic_cray, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  7 04:42:56 np0005549474 podman[94211]: 2025-12-07 09:42:56.061760244 +0000 UTC m=+0.019787680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:56 np0005549474 systemd[1]: Reloading.
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: Deploying daemon node-exporter.compute-0 on compute-0
Dec  7 04:42:56 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:42:56 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  7 04:42:56 np0005549474 systemd[1]: Reloading.
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3498920725' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  7 04:42:56 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:42:56 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:42:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3498920725' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
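That dispatch/finished pair completes the auth import started by the Ansible task at 04:42:56: a throwaway quay.io/ceph/ceph:v19 container (pedantic_cray, removed again just below thanks to --rm) feeds the freshly copied keyring into the cluster's auth database. Reflowed for readability, the command the task ran was:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        auth import -i /etc/ceph/ceph.client.openstack.keyring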
Dec  7 04:42:56 np0005549474 podman[94211]: 2025-12-07 09:42:56.574791985 +0000 UTC m=+0.532819441 container died 0fd1e7ea73fe0696f99d11aa77dc29c5cf4365f6a027bd4bc61a49bdbf69488f (image=quay.io/ceph/ceph:v19, name=pedantic_cray, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:56 np0005549474 systemd[1]: libpod-0fd1e7ea73fe0696f99d11aa77dc29c5cf4365f6a027bd4bc61a49bdbf69488f.scope: Deactivated successfully.
Dec  7 04:42:56 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a6748e374ae9966892e7d76479b944d6ad3a76532e473db657b0c12bfc99a260-merged.mount: Deactivated successfully.
Dec  7 04:42:56 np0005549474 systemd[1]: Starting Ceph node-exporter.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:42:56 np0005549474 podman[94211]: 2025-12-07 09:42:56.761417873 +0000 UTC m=+0.719445319 container remove 0fd1e7ea73fe0696f99d11aa77dc29c5cf4365f6a027bd4bc61a49bdbf69488f (image=quay.io/ceph/ceph:v19, name=pedantic_cray, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:56 np0005549474 systemd[1]: libpod-conmon-0fd1e7ea73fe0696f99d11aa77dc29c5cf4365f6a027bd4bc61a49bdbf69488f.scope: Deactivated successfully.
Dec  7 04:42:56 np0005549474 bash[94410]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec  7 04:42:57 np0005549474 ceph-mon[74516]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  7 04:42:57 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3498920725' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  7 04:42:57 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3498920725' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  7 04:42:57 np0005549474 bash[94410]: Getting image source signatures
Dec  7 04:42:57 np0005549474 bash[94410]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec  7 04:42:57 np0005549474 bash[94410]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec  7 04:42:57 np0005549474 bash[94410]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec  7 04:42:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:42:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v11: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec  7 04:42:57 np0005549474 python3[94453]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:57 np0005549474 bash[94410]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec  7 04:42:57 np0005549474 bash[94410]: Writing manifest to image destination
Dec  7 04:42:57 np0005549474 podman[94485]: 2025-12-07 09:42:57.884396405 +0000 UTC m=+0.319209702 container create d5f83fdee1eca8872437b76a257f8a02157f9073c86378a6162c993992eeb639 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:42:57 np0005549474 podman[94410]: 2025-12-07 09:42:57.913641506 +0000 UTC m=+1.016811196 container create 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:42:57 np0005549474 podman[94410]: 2025-12-07 09:42:57.8988112 +0000 UTC m=+1.001980910 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec  7 04:42:57 np0005549474 systemd[1]: Started libpod-conmon-d5f83fdee1eca8872437b76a257f8a02157f9073c86378a6162c993992eeb639.scope.
Dec  7 04:42:57 np0005549474 podman[94485]: 2025-12-07 09:42:57.866595708 +0000 UTC m=+0.301409025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:57 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4138741022625476ffee480f8812ef1f0fc1d613d43127b163b302cc3bc282/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4138741022625476ffee480f8812ef1f0fc1d613d43127b163b302cc3bc282/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:57 np0005549474 podman[94485]: 2025-12-07 09:42:57.957484438 +0000 UTC m=+0.392297765 container init d5f83fdee1eca8872437b76a257f8a02157f9073c86378a6162c993992eeb639 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 04:42:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c28058fad99ab5bbd8365567a6e944ffffc2f65993d0b9b6f830a38ef4a6c5/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:57 np0005549474 podman[94485]: 2025-12-07 09:42:57.964811594 +0000 UTC m=+0.399624891 container start d5f83fdee1eca8872437b76a257f8a02157f9073c86378a6162c993992eeb639 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:42:57 np0005549474 podman[94410]: 2025-12-07 09:42:57.969570991 +0000 UTC m=+1.072740691 container init 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:42:57 np0005549474 podman[94410]: 2025-12-07 09:42:57.975380126 +0000 UTC m=+1.078549826 container start 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:42:57 np0005549474 podman[94485]: 2025-12-07 09:42:57.976310621 +0000 UTC m=+0.411123918 container attach d5f83fdee1eca8872437b76a257f8a02157f9073c86378a6162c993992eeb639 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:42:57 np0005549474 bash[94410]: 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.981Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.981Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.985Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.985Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.985Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.985Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=arp
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=bcache
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=bonding
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=cpu
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=dmi
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=edac
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=entropy
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=filefd
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=hwmon
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=netclass
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=netdev
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=netstat
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=nfs
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.986Z caller=node_exporter.go:117 level=info collector=nvme
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=os
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=pressure
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=rapl
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=selinux
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=softnet
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=stat
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=textfile
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=time
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=uname
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=xfs
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=node_exporter.go:117 level=info collector=zfs
Dec  7 04:42:57 np0005549474 systemd[1]: Started Ceph node-exporter.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec  7 04:42:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0[94530]: ts=2025-12-07T09:42:57.987Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
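node-exporter.compute-0 is now serving plain HTTP (TLS disabled) on port 9100 with the collectors listed above; the single diskstats error earlier is benign here, meaning only that udev device properties are unavailable because /run/udev/data is not mounted into the container. A quick smoke test from the host, assuming curl is installed:

    # Scrape a few lines of metrics from the new exporter
    curl -s http://localhost:9100/metrics | head -n 5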
Dec  7 04:42:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:42:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:42:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  7 04:42:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:58 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Dec  7 04:42:58 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Dec  7 04:42:58 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:58 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:58 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:42:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  7 04:42:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896900130' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  7 04:42:58 np0005549474 recursing_bartik[94526]: 
Dec  7 04:42:58 np0005549474 recursing_bartik[94526]: {"fsid":"75f4c9fd-539a-5e17-b55a-0a12a4e2736c","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":94,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":39,"num_osds":3,"num_up_osds":3,"osd_up_since":1765100522,"num_in_osds":3,"osd_in_since":1765100495,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":70}],"num_pgs":70,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84094976,"bytes_avail":64327831552,"bytes_total":64411926528,"read_bytes_sec":30031,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2025-12-07T09:42:50:843512+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2025-12-07T09:42:18.986103+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.buauyv":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.ntknug":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"bba86180-fcce-438d-b86f-c0d29b22ebbc":{"message":"Updating node-exporter deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec  7 04:42:58 np0005549474 systemd[1]: libpod-d5f83fdee1eca8872437b76a257f8a02157f9073c86378a6162c993992eeb639.scope: Deactivated successfully.
Dec  7 04:42:58 np0005549474 podman[94485]: 2025-12-07 09:42:58.387422648 +0000 UTC m=+0.822235945 container died d5f83fdee1eca8872437b76a257f8a02157f9073c86378a6162c993992eeb639 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:42:58 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2c4138741022625476ffee480f8812ef1f0fc1d613d43127b163b302cc3bc282-merged.mount: Deactivated successfully.
Dec  7 04:42:58 np0005549474 podman[94485]: 2025-12-07 09:42:58.425129056 +0000 UTC m=+0.859942343 container remove d5f83fdee1eca8872437b76a257f8a02157f9073c86378a6162c993992eeb639 (image=quay.io/ceph/ceph:v19, name=recursing_bartik, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 04:42:58 np0005549474 systemd[1]: libpod-conmon-d5f83fdee1eca8872437b76a257f8a02157f9073c86378a6162c993992eeb639.scope: Deactivated successfully.
Dec  7 04:42:58 np0005549474 python3[94599]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:58 np0005549474 podman[94600]: 2025-12-07 09:42:58.753264005 +0000 UTC m=+0.038338816 container create a5eb1a72970f4e757a18255bc68322d3a5aaad0fc0f43684643b1f8ba7420e8d (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:42:58 np0005549474 systemd[1]: Started libpod-conmon-a5eb1a72970f4e757a18255bc68322d3a5aaad0fc0f43684643b1f8ba7420e8d.scope.
Dec  7 04:42:58 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:58 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61a427c2791f828d8b49346a262521de9e53a367b48729dbbbd75012a195438/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:58 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61a427c2791f828d8b49346a262521de9e53a367b48729dbbbd75012a195438/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:58 np0005549474 podman[94600]: 2025-12-07 09:42:58.820848601 +0000 UTC m=+0.105923442 container init a5eb1a72970f4e757a18255bc68322d3a5aaad0fc0f43684643b1f8ba7420e8d (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:58 np0005549474 podman[94600]: 2025-12-07 09:42:58.825939037 +0000 UTC m=+0.111013848 container start a5eb1a72970f4e757a18255bc68322d3a5aaad0fc0f43684643b1f8ba7420e8d (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:42:58 np0005549474 podman[94600]: 2025-12-07 09:42:58.829085071 +0000 UTC m=+0.114159882 container attach a5eb1a72970f4e757a18255bc68322d3a5aaad0fc0f43684643b1f8ba7420e8d (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:42:58 np0005549474 podman[94600]: 2025-12-07 09:42:58.736000694 +0000 UTC m=+0.021075525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:42:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 04:42:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827221640' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 04:42:59 np0005549474 nervous_mestorf[94616]: 
Dec  7 04:42:59 np0005549474 nervous_mestorf[94616]: {"epoch":3,"fsid":"75f4c9fd-539a-5e17-b55a-0a12a4e2736c","modified":"2025-12-07T09:41:18.042048Z","created":"2025-12-07T09:39:05.386379Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Dec  7 04:42:59 np0005549474 nervous_mestorf[94616]: dumped monmap epoch 3
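The mon dump confirms a full quorum with both msgr v2 (:3300) and legacy v1 (:6789) addresses per monitor; the human-readable "dumped monmap epoch 3" goes to stderr, which is why it lands on its own line after the JSON. To tabulate the same data:

    # Print rank, name and msgr2 address for each monitor
    ceph mon dump --format json 2>/dev/null \
        | jq -r '.mons[] | "\(.rank) \(.name) \(.public_addrs.addrvec[0].addr)"'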
Dec  7 04:42:59 np0005549474 systemd[1]: libpod-a5eb1a72970f4e757a18255bc68322d3a5aaad0fc0f43684643b1f8ba7420e8d.scope: Deactivated successfully.
Dec  7 04:42:59 np0005549474 podman[94600]: 2025-12-07 09:42:59.242598953 +0000 UTC m=+0.527673764 container died a5eb1a72970f4e757a18255bc68322d3a5aaad0fc0f43684643b1f8ba7420e8d (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 04:42:59 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e61a427c2791f828d8b49346a262521de9e53a367b48729dbbbd75012a195438-merged.mount: Deactivated successfully.
Dec  7 04:42:59 np0005549474 podman[94600]: 2025-12-07 09:42:59.279978322 +0000 UTC m=+0.565053143 container remove a5eb1a72970f4e757a18255bc68322d3a5aaad0fc0f43684643b1f8ba7420e8d (image=quay.io/ceph/ceph:v19, name=nervous_mestorf, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:42:59 np0005549474 systemd[1]: libpod-conmon-a5eb1a72970f4e757a18255bc68322d3a5aaad0fc0f43684643b1f8ba7420e8d.scope: Deactivated successfully.
Dec  7 04:42:59 np0005549474 ceph-mon[74516]: Deploying daemon node-exporter.compute-1 on compute-1
Dec  7 04:42:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v12: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 11 op/s
Dec  7 04:42:59 np0005549474 python3[94677]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:42:59 np0005549474 podman[94678]: 2025-12-07 09:42:59.927808045 +0000 UTC m=+0.044448639 container create 5364ba11e4d3c2cceabf3ef17a5c450abde9ed94f184fd2f6f5e3935e909d6a3 (image=quay.io/ceph/ceph:v19, name=bold_hamilton, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 04:42:59 np0005549474 systemd[1]: Started libpod-conmon-5364ba11e4d3c2cceabf3ef17a5c450abde9ed94f184fd2f6f5e3935e909d6a3.scope.
Dec  7 04:42:59 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:42:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b7aa7f4989020c81b1cb7538773f6460f8e1a824a5168fe3fe54ae873c9a37/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b7aa7f4989020c81b1cb7538773f6460f8e1a824a5168fe3fe54ae873c9a37/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:42:59 np0005549474 podman[94678]: 2025-12-07 09:42:59.903595959 +0000 UTC m=+0.020236573 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:43:00 np0005549474 podman[94678]: 2025-12-07 09:43:00.029687779 +0000 UTC m=+0.146328383 container init 5364ba11e4d3c2cceabf3ef17a5c450abde9ed94f184fd2f6f5e3935e909d6a3 (image=quay.io/ceph/ceph:v19, name=bold_hamilton, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:43:00 np0005549474 podman[94678]: 2025-12-07 09:43:00.035292308 +0000 UTC m=+0.151932892 container start 5364ba11e4d3c2cceabf3ef17a5c450abde9ed94f184fd2f6f5e3935e909d6a3 (image=quay.io/ceph/ceph:v19, name=bold_hamilton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 04:43:00 np0005549474 podman[94678]: 2025-12-07 09:43:00.039053928 +0000 UTC m=+0.155694532 container attach 5364ba11e4d3c2cceabf3ef17a5c450abde9ed94f184fd2f6f5e3935e909d6a3 (image=quay.io/ceph/ceph:v19, name=bold_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 04:43:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:43:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:43:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  7 04:43:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:00 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Dec  7 04:43:00 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Dec  7 04:43:00 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:00 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:00 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec  7 04:43:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3080182648' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  7 04:43:00 np0005549474 bold_hamilton[94693]: [client.openstack]
Dec  7 04:43:00 np0005549474 bold_hamilton[94693]: 	key = AQASSzVpAAAAABAAHQ1Di7YjsYFnT8csFjJ07A==
Dec  7 04:43:00 np0005549474 bold_hamilton[94693]: 	caps mgr = "allow *"
Dec  7 04:43:00 np0005549474 bold_hamilton[94693]: 	caps mon = "profile rbd"
Dec  7 04:43:00 np0005549474 bold_hamilton[94693]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec  7 04:43:00 np0005549474 systemd[1]: libpod-5364ba11e4d3c2cceabf3ef17a5c450abde9ed94f184fd2f6f5e3935e909d6a3.scope: Deactivated successfully.
Dec  7 04:43:00 np0005549474 podman[94678]: 2025-12-07 09:43:00.458591521 +0000 UTC m=+0.575232135 container died 5364ba11e4d3c2cceabf3ef17a5c450abde9ed94f184fd2f6f5e3935e909d6a3 (image=quay.io/ceph/ceph:v19, name=bold_hamilton, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:43:00 np0005549474 systemd[1]: var-lib-containers-storage-overlay-54b7aa7f4989020c81b1cb7538773f6460f8e1a824a5168fe3fe54ae873c9a37-merged.mount: Deactivated successfully.
Dec  7 04:43:00 np0005549474 podman[94678]: 2025-12-07 09:43:00.505318159 +0000 UTC m=+0.621958733 container remove 5364ba11e4d3c2cceabf3ef17a5c450abde9ed94f184fd2f6f5e3935e909d6a3 (image=quay.io/ceph/ceph:v19, name=bold_hamilton, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:43:00 np0005549474 systemd[1]: libpod-conmon-5364ba11e4d3c2cceabf3ef17a5c450abde9ed94f184fd2f6f5e3935e909d6a3.scope: Deactivated successfully.
Dec  7 04:43:01 np0005549474 ceph-mon[74516]: Deploying daemon node-exporter.compute-2 on compute-2
Dec  7 04:43:01 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/3080182648' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  7 04:43:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v13: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Dec  7 04:43:01 np0005549474 ansible-async_wrapper.py[94879]: Invoked with j491590541655 30 /home/zuul/.ansible/tmp/ansible-tmp-1765100581.442819-37445-187157267466116/AnsiballZ_command.py _
Dec  7 04:43:01 np0005549474 ansible-async_wrapper.py[94882]: Starting module and watcher
Dec  7 04:43:01 np0005549474 ansible-async_wrapper.py[94882]: Start watching 94883 (30)
Dec  7 04:43:01 np0005549474 ansible-async_wrapper.py[94883]: Start module (94883)
Dec  7 04:43:01 np0005549474 ansible-async_wrapper.py[94879]: Return async_wrapper task started.
Dec  7 04:43:02 np0005549474 python3[94884]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:43:02 np0005549474 podman[94885]: 2025-12-07 09:43:02.100526182 +0000 UTC m=+0.067131156 container create 7aea4e75d749e3b466085bd8d9ee01ecf93bb2288aad59a79f885b123a80fc0b (image=quay.io/ceph/ceph:v19, name=eloquent_gauss, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 04:43:02 np0005549474 systemd[1]: Started libpod-conmon-7aea4e75d749e3b466085bd8d9ee01ecf93bb2288aad59a79f885b123a80fc0b.scope.
Dec  7 04:43:02 np0005549474 podman[94885]: 2025-12-07 09:43:02.059002512 +0000 UTC m=+0.025607536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:43:02 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:02 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ea2ba66abdd92e2ff52b6ca5912b3b3957c751ae79494b52e13e0891abb2ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:02 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ea2ba66abdd92e2ff52b6ca5912b3b3957c751ae79494b52e13e0891abb2ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:02 np0005549474 podman[94885]: 2025-12-07 09:43:02.171910859 +0000 UTC m=+0.138515793 container init 7aea4e75d749e3b466085bd8d9ee01ecf93bb2288aad59a79f885b123a80fc0b (image=quay.io/ceph/ceph:v19, name=eloquent_gauss, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:43:02 np0005549474 podman[94885]: 2025-12-07 09:43:02.178057614 +0000 UTC m=+0.144662548 container start 7aea4e75d749e3b466085bd8d9ee01ecf93bb2288aad59a79f885b123a80fc0b (image=quay.io/ceph/ceph:v19, name=eloquent_gauss, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:43:02 np0005549474 podman[94885]: 2025-12-07 09:43:02.180969692 +0000 UTC m=+0.147574636 container attach 7aea4e75d749e3b466085bd8d9ee01ecf93bb2288aad59a79f885b123a80fc0b (image=quay.io/ceph/ceph:v19, name=eloquent_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 04:43:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:02 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14529 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 04:43:02 np0005549474 eloquent_gauss[94901]: 
Dec  7 04:43:02 np0005549474 eloquent_gauss[94901]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  7 04:43:02 np0005549474 systemd[1]: libpod-7aea4e75d749e3b466085bd8d9ee01ecf93bb2288aad59a79f885b123a80fc0b.scope: Deactivated successfully.
Dec  7 04:43:02 np0005549474 conmon[94901]: conmon 7aea4e75d749e3b46608 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7aea4e75d749e3b466085bd8d9ee01ecf93bb2288aad59a79f885b123a80fc0b.scope/container/memory.events
Dec  7 04:43:02 np0005549474 podman[94885]: 2025-12-07 09:43:02.56410906 +0000 UTC m=+0.530713994 container died 7aea4e75d749e3b466085bd8d9ee01ecf93bb2288aad59a79f885b123a80fc0b (image=quay.io/ceph/ceph:v19, name=eloquent_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 04:43:02 np0005549474 systemd[1]: var-lib-containers-storage-overlay-78ea2ba66abdd92e2ff52b6ca5912b3b3957c751ae79494b52e13e0891abb2ee-merged.mount: Deactivated successfully.
Dec  7 04:43:02 np0005549474 podman[94885]: 2025-12-07 09:43:02.737602397 +0000 UTC m=+0.704207331 container remove 7aea4e75d749e3b466085bd8d9ee01ecf93bb2288aad59a79f885b123a80fc0b (image=quay.io/ceph/ceph:v19, name=eloquent_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:02 np0005549474 ansible-async_wrapper.py[94883]: Module complete (94883)
Dec  7 04:43:02 np0005549474 systemd[1]: libpod-conmon-7aea4e75d749e3b466085bd8d9ee01ecf93bb2288aad59a79f885b123a80fc0b.scope: Deactivated successfully.
Dec  7 04:43:03 np0005549474 python3[94985]: ansible-ansible.legacy.async_status Invoked with jid=j491590541655.94879 mode=status _async_dir=/root/.ansible_async
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:03 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev bba86180-fcce-438d-b86f-c0d29b22ebbc (Updating node-exporter deployment (+3 -> 3))
Dec  7 04:43:03 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event bba86180-fcce-438d-b86f-c0d29b22ebbc (Updating node-exporter deployment (+3 -> 3)) in 8 seconds
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v14: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 8 op/s
Dec  7 04:43:03 np0005549474 python3[95034]: ansible-ansible.legacy.async_status Invoked with jid=j491590541655.94879 mode=cleanup _async_dir=/root/.ansible_async
Dec  7 04:43:03 np0005549474 podman[95125]: 2025-12-07 09:43:03.831906443 +0000 UTC m=+0.039684442 container create e1e4d6f0934e8940d56d244921c5f9bee4df028aee4e2172f1c2d7747a7ea06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:43:03 np0005549474 systemd[1]: Started libpod-conmon-e1e4d6f0934e8940d56d244921c5f9bee4df028aee4e2172f1c2d7747a7ea06e.scope.
Dec  7 04:43:03 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:03 np0005549474 podman[95125]: 2025-12-07 09:43:03.812976907 +0000 UTC m=+0.020754956 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:03 np0005549474 podman[95125]: 2025-12-07 09:43:03.963974092 +0000 UTC m=+0.171752171 container init e1e4d6f0934e8940d56d244921c5f9bee4df028aee4e2172f1c2d7747a7ea06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_haslett, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:03 np0005549474 podman[95125]: 2025-12-07 09:43:03.97437025 +0000 UTC m=+0.182148289 container start e1e4d6f0934e8940d56d244921c5f9bee4df028aee4e2172f1c2d7747a7ea06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_haslett, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  7 04:43:03 np0005549474 podman[95125]: 2025-12-07 09:43:03.97774068 +0000 UTC m=+0.185518679 container attach e1e4d6f0934e8940d56d244921c5f9bee4df028aee4e2172f1c2d7747a7ea06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_haslett, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:03 np0005549474 clever_haslett[95166]: 167 167
Dec  7 04:43:03 np0005549474 systemd[1]: libpod-e1e4d6f0934e8940d56d244921c5f9bee4df028aee4e2172f1c2d7747a7ea06e.scope: Deactivated successfully.
Dec  7 04:43:03 np0005549474 podman[95125]: 2025-12-07 09:43:03.981164442 +0000 UTC m=+0.188942491 container died e1e4d6f0934e8940d56d244921c5f9bee4df028aee4e2172f1c2d7747a7ea06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:43:04 np0005549474 python3[95168]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:43:04 np0005549474 systemd[1]: var-lib-containers-storage-overlay-435119615f2ca9bc127a1ae063e6b0507d19bec7ad1d734d766b3e51e0517b04-merged.mount: Deactivated successfully.
Dec  7 04:43:04 np0005549474 podman[95125]: 2025-12-07 09:43:04.027529461 +0000 UTC m=+0.235307460 container remove e1e4d6f0934e8940d56d244921c5f9bee4df028aee4e2172f1c2d7747a7ea06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 04:43:04 np0005549474 systemd[1]: libpod-conmon-e1e4d6f0934e8940d56d244921c5f9bee4df028aee4e2172f1c2d7747a7ea06e.scope: Deactivated successfully.
Dec  7 04:43:04 np0005549474 podman[95184]: 2025-12-07 09:43:04.063657827 +0000 UTC m=+0.043498214 container create 7fd626fc7146f7c4d9847961c8505e921fc063a222e084a65cefb45715c8224d (image=quay.io/ceph/ceph:v19, name=determined_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 04:43:04 np0005549474 systemd[1]: Started libpod-conmon-7fd626fc7146f7c4d9847961c8505e921fc063a222e084a65cefb45715c8224d.scope.
Dec  7 04:43:04 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b90e557090272f66dcd343a1f76e4114cc95b42ecfe67c31dd3ab6cc570acd6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b90e557090272f66dcd343a1f76e4114cc95b42ecfe67c31dd3ab6cc570acd6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:04 np0005549474 podman[95184]: 2025-12-07 09:43:04.127937145 +0000 UTC m=+0.107777572 container init 7fd626fc7146f7c4d9847961c8505e921fc063a222e084a65cefb45715c8224d (image=quay.io/ceph/ceph:v19, name=determined_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 04:43:04 np0005549474 podman[95184]: 2025-12-07 09:43:04.133303278 +0000 UTC m=+0.113143675 container start 7fd626fc7146f7c4d9847961c8505e921fc063a222e084a65cefb45715c8224d (image=quay.io/ceph/ceph:v19, name=determined_thompson, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:43:04 np0005549474 podman[95184]: 2025-12-07 09:43:04.040728693 +0000 UTC m=+0.020569090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:43:04 np0005549474 podman[95184]: 2025-12-07 09:43:04.136411681 +0000 UTC m=+0.116252088 container attach 7fd626fc7146f7c4d9847961c8505e921fc063a222e084a65cefb45715c8224d (image=quay.io/ceph/ceph:v19, name=determined_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:43:04 np0005549474 podman[95210]: 2025-12-07 09:43:04.206301579 +0000 UTC m=+0.047424388 container create 76dbcaa9ce4c34c066294ab4a8d684758f0c976e9997c4e77f12ce0b04c13bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  7 04:43:04 np0005549474 systemd[1]: Started libpod-conmon-76dbcaa9ce4c34c066294ab4a8d684758f0c976e9997c4e77f12ce0b04c13bb8.scope.
Dec  7 04:43:04 np0005549474 podman[95210]: 2025-12-07 09:43:04.181540307 +0000 UTC m=+0.022663126 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:04 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338114a3c716230798da943a84b06474468223462f845acdedb8a0c406dcc4a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338114a3c716230798da943a84b06474468223462f845acdedb8a0c406dcc4a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338114a3c716230798da943a84b06474468223462f845acdedb8a0c406dcc4a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338114a3c716230798da943a84b06474468223462f845acdedb8a0c406dcc4a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338114a3c716230798da943a84b06474468223462f845acdedb8a0c406dcc4a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:04 np0005549474 podman[95210]: 2025-12-07 09:43:04.324925789 +0000 UTC m=+0.166048578 container init 76dbcaa9ce4c34c066294ab4a8d684758f0c976e9997c4e77f12ce0b04c13bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 04:43:04 np0005549474 podman[95210]: 2025-12-07 09:43:04.333677363 +0000 UTC m=+0.174800132 container start 76dbcaa9ce4c34c066294ab4a8d684758f0c976e9997c4e77f12ce0b04c13bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_greider, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:43:04 np0005549474 podman[95210]: 2025-12-07 09:43:04.336775535 +0000 UTC m=+0.177898304 container attach 76dbcaa9ce4c34c066294ab4a8d684758f0c976e9997c4e77f12ce0b04c13bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_greider, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 04:43:04 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:04 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:04 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:04 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:43:04 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14535 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 04:43:04 np0005549474 determined_thompson[95201]: 
Dec  7 04:43:04 np0005549474 determined_thompson[95201]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  7 04:43:04 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 10 completed events
Dec  7 04:43:04 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:43:04 np0005549474 systemd[1]: libpod-7fd626fc7146f7c4d9847961c8505e921fc063a222e084a65cefb45715c8224d.scope: Deactivated successfully.
Dec  7 04:43:04 np0005549474 podman[95184]: 2025-12-07 09:43:04.497753498 +0000 UTC m=+0.477593935 container died 7fd626fc7146f7c4d9847961c8505e921fc063a222e084a65cefb45715c8224d (image=quay.io/ceph/ceph:v19, name=determined_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 04:43:04 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:04 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0b90e557090272f66dcd343a1f76e4114cc95b42ecfe67c31dd3ab6cc570acd6-merged.mount: Deactivated successfully.
Dec  7 04:43:04 np0005549474 podman[95184]: 2025-12-07 09:43:04.543750337 +0000 UTC m=+0.523590724 container remove 7fd626fc7146f7c4d9847961c8505e921fc063a222e084a65cefb45715c8224d (image=quay.io/ceph/ceph:v19, name=determined_thompson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  7 04:43:04 np0005549474 systemd[1]: libpod-conmon-7fd626fc7146f7c4d9847961c8505e921fc063a222e084a65cefb45715c8224d.scope: Deactivated successfully.
Dec  7 04:43:04 np0005549474 nice_greider[95245]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:43:04 np0005549474 nice_greider[95245]: --> All data devices are unavailable
Dec  7 04:43:04 np0005549474 systemd[1]: libpod-76dbcaa9ce4c34c066294ab4a8d684758f0c976e9997c4e77f12ce0b04c13bb8.scope: Deactivated successfully.
Dec  7 04:43:04 np0005549474 conmon[95245]: conmon 76dbcaa9ce4c34c06629 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76dbcaa9ce4c34c066294ab4a8d684758f0c976e9997c4e77f12ce0b04c13bb8.scope/container/memory.events
Dec  7 04:43:04 np0005549474 podman[95210]: 2025-12-07 09:43:04.711394928 +0000 UTC m=+0.552517707 container died 76dbcaa9ce4c34c066294ab4a8d684758f0c976e9997c4e77f12ce0b04c13bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_greider, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Dec  7 04:43:04 np0005549474 podman[95210]: 2025-12-07 09:43:04.760167811 +0000 UTC m=+0.601290580 container remove 76dbcaa9ce4c34c066294ab4a8d684758f0c976e9997c4e77f12ce0b04c13bb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 04:43:04 np0005549474 systemd[1]: libpod-conmon-76dbcaa9ce4c34c066294ab4a8d684758f0c976e9997c4e77f12ce0b04c13bb8.scope: Deactivated successfully.
Dec  7 04:43:04 np0005549474 systemd[1]: var-lib-containers-storage-overlay-338114a3c716230798da943a84b06474468223462f845acdedb8a0c406dcc4a0-merged.mount: Deactivated successfully.
Dec  7 04:43:05 np0005549474 podman[95377]: 2025-12-07 09:43:05.289150448 +0000 UTC m=+0.042477946 container create 8a4c1cd9a83de4bee6beaa5aa3ed982c76fc614214f3d9b32e3177d4555ab74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:05 np0005549474 systemd[1]: Started libpod-conmon-8a4c1cd9a83de4bee6beaa5aa3ed982c76fc614214f3d9b32e3177d4555ab74a.scope.
Dec  7 04:43:05 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:05 np0005549474 podman[95377]: 2025-12-07 09:43:05.33635977 +0000 UTC m=+0.089687288 container init 8a4c1cd9a83de4bee6beaa5aa3ed982c76fc614214f3d9b32e3177d4555ab74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 04:43:05 np0005549474 podman[95377]: 2025-12-07 09:43:05.343619184 +0000 UTC m=+0.096946682 container start 8a4c1cd9a83de4bee6beaa5aa3ed982c76fc614214f3d9b32e3177d4555ab74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:43:05 np0005549474 podman[95377]: 2025-12-07 09:43:05.346190672 +0000 UTC m=+0.099518170 container attach 8a4c1cd9a83de4bee6beaa5aa3ed982c76fc614214f3d9b32e3177d4555ab74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:05 np0005549474 jolly_cerf[95418]: 167 167
Dec  7 04:43:05 np0005549474 systemd[1]: libpod-8a4c1cd9a83de4bee6beaa5aa3ed982c76fc614214f3d9b32e3177d4555ab74a.scope: Deactivated successfully.
Dec  7 04:43:05 np0005549474 podman[95377]: 2025-12-07 09:43:05.348509394 +0000 UTC m=+0.101836912 container died 8a4c1cd9a83de4bee6beaa5aa3ed982c76fc614214f3d9b32e3177d4555ab74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 04:43:05 np0005549474 podman[95377]: 2025-12-07 09:43:05.269885543 +0000 UTC m=+0.023213071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:05 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5586e805d185b92ecc497b8ade233ff3f31db2c33db5cca1401bbb81a10e6733-merged.mount: Deactivated successfully.
Dec  7 04:43:05 np0005549474 podman[95377]: 2025-12-07 09:43:05.384053334 +0000 UTC m=+0.137380832 container remove 8a4c1cd9a83de4bee6beaa5aa3ed982c76fc614214f3d9b32e3177d4555ab74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:43:05 np0005549474 systemd[1]: libpod-conmon-8a4c1cd9a83de4bee6beaa5aa3ed982c76fc614214f3d9b32e3177d4555ab74a.scope: Deactivated successfully.
Dec  7 04:43:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v15: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
Dec  7 04:43:05 np0005549474 python3[95420]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:43:05 np0005549474 podman[95439]: 2025-12-07 09:43:05.503491567 +0000 UTC m=+0.039883208 container create e2a6668d8f0d3661df18f8d994f0ebacd9d2290eaf1bed1fedcb20ae0b072b00 (image=quay.io/ceph/ceph:v19, name=crazy_saha, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 04:43:05 np0005549474 systemd[1]: Started libpod-conmon-e2a6668d8f0d3661df18f8d994f0ebacd9d2290eaf1bed1fedcb20ae0b072b00.scope.
Dec  7 04:43:05 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5510cd9eba80564e9f419267d05c6f333b2edc70ca1d45c92f7b8768ffa1d836/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5510cd9eba80564e9f419267d05c6f333b2edc70ca1d45c92f7b8768ffa1d836/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:05 np0005549474 podman[95439]: 2025-12-07 09:43:05.484089948 +0000 UTC m=+0.020481609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:43:05 np0005549474 podman[95450]: 2025-12-07 09:43:05.526737198 +0000 UTC m=+0.043381491 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:05 np0005549474 podman[95450]: 2025-12-07 09:43:05.62338422 +0000 UTC m=+0.140028453 container create 61de39ba8d43bbe256fad33888a0c26f82cafb60bfd32943029e55810d323366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:05 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:05 np0005549474 podman[95439]: 2025-12-07 09:43:05.668531648 +0000 UTC m=+0.204923289 container init e2a6668d8f0d3661df18f8d994f0ebacd9d2290eaf1bed1fedcb20ae0b072b00 (image=quay.io/ceph/ceph:v19, name=crazy_saha, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  7 04:43:05 np0005549474 podman[95439]: 2025-12-07 09:43:05.673597012 +0000 UTC m=+0.209988643 container start e2a6668d8f0d3661df18f8d994f0ebacd9d2290eaf1bed1fedcb20ae0b072b00 (image=quay.io/ceph/ceph:v19, name=crazy_saha, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 04:43:05 np0005549474 podman[95439]: 2025-12-07 09:43:05.677375904 +0000 UTC m=+0.213767545 container attach e2a6668d8f0d3661df18f8d994f0ebacd9d2290eaf1bed1fedcb20ae0b072b00 (image=quay.io/ceph/ceph:v19, name=crazy_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 04:43:05 np0005549474 systemd[1]: Started libpod-conmon-61de39ba8d43bbe256fad33888a0c26f82cafb60bfd32943029e55810d323366.scope.
Dec  7 04:43:05 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529e6aa9a215b72bd9130864b4c10f5894cda16c7c31833179d0a4337615a648/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529e6aa9a215b72bd9130864b4c10f5894cda16c7c31833179d0a4337615a648/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529e6aa9a215b72bd9130864b4c10f5894cda16c7c31833179d0a4337615a648/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529e6aa9a215b72bd9130864b4c10f5894cda16c7c31833179d0a4337615a648/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:05 np0005549474 podman[95450]: 2025-12-07 09:43:05.717057214 +0000 UTC m=+0.233701477 container init 61de39ba8d43bbe256fad33888a0c26f82cafb60bfd32943029e55810d323366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_brahmagupta, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:43:05 np0005549474 podman[95450]: 2025-12-07 09:43:05.724299667 +0000 UTC m=+0.240943910 container start 61de39ba8d43bbe256fad33888a0c26f82cafb60bfd32943029e55810d323366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 04:43:05 np0005549474 podman[95450]: 2025-12-07 09:43:05.727305488 +0000 UTC m=+0.243949741 container attach 61de39ba8d43bbe256fad33888a0c26f82cafb60bfd32943029e55810d323366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]: {
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:    "0": [
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:        {
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            "devices": [
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "/dev/loop3"
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            ],
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            "lv_name": "ceph_lv0",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            "lv_size": "21470642176",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            "name": "ceph_lv0",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            "tags": {
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.cluster_name": "ceph",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.crush_device_class": "",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.encrypted": "0",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.osd_id": "0",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.type": "block",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.vdo": "0",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:                "ceph.with_tpm": "0"
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            },
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            "type": "block",
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:            "vg_name": "ceph_vg0"
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:        }
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]:    ]
Dec  7 04:43:06 np0005549474 sad_brahmagupta[95479]: }
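[editor's note] The JSON block printed by sad_brahmagupta above matches the inventory format of "ceph-volume lvm list --format json": a map from OSD id to the logical volumes backing it, with the ceph.* LV tags repeated in parsed form under "tags". A minimal Python sketch for pulling the device-to-OSD mapping out of a saved copy of that output (the file name is hypothetical):

    import json

    # Inventory captured from the container's stdout above,
    # assumed saved verbatim to this (hypothetical) file.
    with open("ceph_volume_lvm_list.json") as f:
        listing = json.load(f)

    for osd_id, lvs in listing.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")

For the entry above this prints one line: osd.0 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3, unencrypted.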
Dec  7 04:43:06 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14541 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 04:43:06 np0005549474 crazy_saha[95471]: 
Dec  7 04:43:06 np0005549474 crazy_saha[95471]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
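[editor's note] The crazy_saha output is the cluster's exported service specification set (the "orch ls" with export=true and format=json dispatched to the mgr just above). Each entry carries service_type, service_name, a placement block, and optionally a spec block. A short Python sketch that summarizes where each service is placed, assuming the JSON was saved to a file (name hypothetical):

    import json

    with open("orch_ls_export.json") as f:
        specs = json.load(f)

    for spec in specs:
        p = spec.get("placement", {})
        where = ",".join(p["hosts"]) if "hosts" in p else p.get("host_pattern", "")
        if "count" in p:
            where = f"count={p['count']} {where}".strip()
        print(f"{spec['service_name']:<26} {spec['service_type']:<14} {where}")

Against the dump above this lists the mon/mgr/mds/osd/nfs/rgw placements pinned to compute-0..2 and the wildcard crash and node-exporter specs.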
Dec  7 04:43:06 np0005549474 systemd[1]: libpod-61de39ba8d43bbe256fad33888a0c26f82cafb60bfd32943029e55810d323366.scope: Deactivated successfully.
Dec  7 04:43:06 np0005549474 podman[95450]: 2025-12-07 09:43:06.027720107 +0000 UTC m=+0.544364390 container died 61de39ba8d43bbe256fad33888a0c26f82cafb60bfd32943029e55810d323366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 04:43:06 np0005549474 systemd[1]: libpod-e2a6668d8f0d3661df18f8d994f0ebacd9d2290eaf1bed1fedcb20ae0b072b00.scope: Deactivated successfully.
Dec  7 04:43:06 np0005549474 podman[95439]: 2025-12-07 09:43:06.03682409 +0000 UTC m=+0.573215741 container died e2a6668d8f0d3661df18f8d994f0ebacd9d2290eaf1bed1fedcb20ae0b072b00 (image=quay.io/ceph/ceph:v19, name=crazy_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 04:43:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5510cd9eba80564e9f419267d05c6f333b2edc70ca1d45c92f7b8768ffa1d836-merged.mount: Deactivated successfully.
Dec  7 04:43:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay-529e6aa9a215b72bd9130864b4c10f5894cda16c7c31833179d0a4337615a648-merged.mount: Deactivated successfully.
Dec  7 04:43:06 np0005549474 podman[95439]: 2025-12-07 09:43:06.088768619 +0000 UTC m=+0.625160270 container remove e2a6668d8f0d3661df18f8d994f0ebacd9d2290eaf1bed1fedcb20ae0b072b00 (image=quay.io/ceph/ceph:v19, name=crazy_saha, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:43:06 np0005549474 systemd[1]: libpod-conmon-e2a6668d8f0d3661df18f8d994f0ebacd9d2290eaf1bed1fedcb20ae0b072b00.scope: Deactivated successfully.
Dec  7 04:43:06 np0005549474 podman[95450]: 2025-12-07 09:43:06.107755455 +0000 UTC m=+0.624399698 container remove 61de39ba8d43bbe256fad33888a0c26f82cafb60bfd32943029e55810d323366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:43:06 np0005549474 systemd[1]: libpod-conmon-61de39ba8d43bbe256fad33888a0c26f82cafb60bfd32943029e55810d323366.scope: Deactivated successfully.
Dec  7 04:43:06 np0005549474 podman[95621]: 2025-12-07 09:43:06.622741229 +0000 UTC m=+0.037425981 container create 6c911ebec22d2747eedd93797b6be66790e648214b425118e5c3c8ffef3cbcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chatelet, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 04:43:06 np0005549474 systemd[1]: Started libpod-conmon-6c911ebec22d2747eedd93797b6be66790e648214b425118e5c3c8ffef3cbcbc.scope.
Dec  7 04:43:06 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:06 np0005549474 podman[95621]: 2025-12-07 09:43:06.696354896 +0000 UTC m=+0.111039688 container init 6c911ebec22d2747eedd93797b6be66790e648214b425118e5c3c8ffef3cbcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 04:43:06 np0005549474 podman[95621]: 2025-12-07 09:43:06.605333614 +0000 UTC m=+0.020018386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:06 np0005549474 podman[95621]: 2025-12-07 09:43:06.702770518 +0000 UTC m=+0.117455270 container start 6c911ebec22d2747eedd93797b6be66790e648214b425118e5c3c8ffef3cbcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 04:43:06 np0005549474 podman[95621]: 2025-12-07 09:43:06.70621859 +0000 UTC m=+0.120903362 container attach 6c911ebec22d2747eedd93797b6be66790e648214b425118e5c3c8ffef3cbcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chatelet, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 04:43:06 np0005549474 goofy_chatelet[95637]: 167 167
Dec  7 04:43:06 np0005549474 systemd[1]: libpod-6c911ebec22d2747eedd93797b6be66790e648214b425118e5c3c8ffef3cbcbc.scope: Deactivated successfully.
Dec  7 04:43:06 np0005549474 podman[95621]: 2025-12-07 09:43:06.707441242 +0000 UTC m=+0.122125994 container died 6c911ebec22d2747eedd93797b6be66790e648214b425118e5c3c8ffef3cbcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 04:43:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f3edbd2ffe5028a041118e5e1f129b871e4d787ca5e3bebc568c18dd105f0d96-merged.mount: Deactivated successfully.
Dec  7 04:43:06 np0005549474 podman[95621]: 2025-12-07 09:43:06.736779636 +0000 UTC m=+0.151464378 container remove 6c911ebec22d2747eedd93797b6be66790e648214b425118e5c3c8ffef3cbcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chatelet, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 04:43:06 np0005549474 systemd[1]: libpod-conmon-6c911ebec22d2747eedd93797b6be66790e648214b425118e5c3c8ffef3cbcbc.scope: Deactivated successfully.
Dec  7 04:43:06 np0005549474 ansible-async_wrapper.py[94882]: Done in kid B.
Dec  7 04:43:06 np0005549474 podman[95682]: 2025-12-07 09:43:06.906657326 +0000 UTC m=+0.057890708 container create dd77b1995ecd9903689bc5f6f0f80db14cb75fed60ceb5f99c3fe0e225ec003f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 04:43:06 np0005549474 systemd[1]: Started libpod-conmon-dd77b1995ecd9903689bc5f6f0f80db14cb75fed60ceb5f99c3fe0e225ec003f.scope.
Dec  7 04:43:06 np0005549474 podman[95682]: 2025-12-07 09:43:06.884618918 +0000 UTC m=+0.035852330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:07 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:07 np0005549474 python3[95697]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
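[editor's note] The Ansible task above shells out to podman for a one-shot "ceph orch ps -f json" against the cluster, using the host's admin keyring. The same invocation can be scripted directly; a sketch under the same assumptions (paths, fsid, and image tag exactly as in the logged command):

    import json
    import subprocess

    # One-shot "ceph orch ps -f json" inside the v19 image, mirroring
    # the podman command line recorded in the Ansible task above.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "ps", "-f", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    print(len(json.loads(out)), "daemons reported")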
Dec  7 04:43:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd29829338db4bd6b255d4d1e1f81f1d606c298285246fe769a718300709755/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd29829338db4bd6b255d4d1e1f81f1d606c298285246fe769a718300709755/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd29829338db4bd6b255d4d1e1f81f1d606c298285246fe769a718300709755/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd29829338db4bd6b255d4d1e1f81f1d606c298285246fe769a718300709755/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:07 np0005549474 podman[95682]: 2025-12-07 09:43:07.042089856 +0000 UTC m=+0.193323308 container init dd77b1995ecd9903689bc5f6f0f80db14cb75fed60ceb5f99c3fe0e225ec003f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:07 np0005549474 podman[95682]: 2025-12-07 09:43:07.052177595 +0000 UTC m=+0.203411007 container start dd77b1995ecd9903689bc5f6f0f80db14cb75fed60ceb5f99c3fe0e225ec003f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_tu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:43:07 np0005549474 podman[95682]: 2025-12-07 09:43:07.056045319 +0000 UTC m=+0.207278721 container attach dd77b1995ecd9903689bc5f6f0f80db14cb75fed60ceb5f99c3fe0e225ec003f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 04:43:07 np0005549474 podman[95706]: 2025-12-07 09:43:07.080355408 +0000 UTC m=+0.056270075 container create 2451d30f30925f4e05f9ba9c2af7212154df11050db82243ec6e2b10f0e83eca (image=quay.io/ceph/ceph:v19, name=hopeful_colden, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  7 04:43:07 np0005549474 systemd[1]: Started libpod-conmon-2451d30f30925f4e05f9ba9c2af7212154df11050db82243ec6e2b10f0e83eca.scope.
Dec  7 04:43:07 np0005549474 podman[95706]: 2025-12-07 09:43:07.052146524 +0000 UTC m=+0.028061291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:43:07 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4482e65b09a1530372599be9fdc3a9cb24b79d84e824841ba4324d2fa33a38ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4482e65b09a1530372599be9fdc3a9cb24b79d84e824841ba4324d2fa33a38ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:07 np0005549474 podman[95706]: 2025-12-07 09:43:07.169832169 +0000 UTC m=+0.145746926 container init 2451d30f30925f4e05f9ba9c2af7212154df11050db82243ec6e2b10f0e83eca (image=quay.io/ceph/ceph:v19, name=hopeful_colden, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 04:43:07 np0005549474 podman[95706]: 2025-12-07 09:43:07.180067133 +0000 UTC m=+0.155981830 container start 2451d30f30925f4e05f9ba9c2af7212154df11050db82243ec6e2b10f0e83eca (image=quay.io/ceph/ceph:v19, name=hopeful_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 04:43:07 np0005549474 podman[95706]: 2025-12-07 09:43:07.184562684 +0000 UTC m=+0.160477591 container attach 2451d30f30925f4e05f9ba9c2af7212154df11050db82243ec6e2b10f0e83eca (image=quay.io/ceph/ceph:v19, name=hopeful_colden, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v16: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Dec  7 04:43:07 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.14547 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 04:43:07 np0005549474 hopeful_colden[95723]: 
Dec  7 04:43:07 np0005549474 hopeful_colden[95723]: [{"container_id": "3282b59c6d2b", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.10%", "created": "2025-12-07T09:39:49.868165Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-07T09:42:51.207429Z", "memory_usage": 7786725, "ports": [], "service_name": "crash", "started": "2025-12-07T09:39:49.624560Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@crash.compute-0", "version": "19.2.3"}, {"container_id": "0adb3b962f9b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.46%", "created": "2025-12-07T09:40:29.196460Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-07T09:42:51.214964Z", "memory_usage": 7817134, "ports": [], "service_name": "crash", "started": "2025-12-07T09:40:29.046702Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@crash.compute-1", "version": "19.2.3"}, {"container_id": "848c0f719dd8", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.31%", "created": "2025-12-07T09:41:32.297177Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-07T09:42:51.413741Z", "memory_usage": 7795113, "ports": [], "service_name": "crash", "started": "2025-12-07T09:41:32.175529Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@crash.compute-2", "version": "19.2.3"}, {"container_id": "7d74b23a9f56", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "23.97%", "created": "2025-12-07T09:39:11.746168Z", "daemon_id": "compute-0.dotugk", "daemon_name": "mgr.compute-0.dotugk", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-07T09:42:51.207327Z", "memory_usage": 540436070, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-07T09:39:11.646979Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@mgr.compute-0.dotugk", "version": "19.2.3"}, {"container_id": "786fc7fc7d21", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "30.61%", "created": "2025-12-07T09:41:30.301988Z", "daemon_id": "compute-1.buauyv", "daemon_name": "mgr.compute-1.buauyv", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-07T09:42:51.215326Z", "memory_usage": 503106764, "ports": [8765], "service_name": "mgr", "started": "2025-12-07T09:41:30.210312Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@mgr.compute-1.buauyv", "version": "19.2.3"}, {"container_id": "4525f25b1df5", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "26.91%", "created": "2025-12-07T09:41:19.298724Z", "daemon_id": "compute-2.ntknug", "daemon_name": "mgr.compute-2.ntknug", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-07T09:42:51.413673Z", "memory_usage": 503735910, "ports": [8765], "service_name": "mgr", "started": "2025-12-07T09:41:19.148113Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@mgr.compute-2.ntknug", "version": "19.2.3"}, {"container_id": "25a7da5f7682", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.25%", "created": "2025-12-07T09:39:07.712062Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-07T09:42:51.207222Z", "memory_request": 2147483648, "memory_usage": 59611545, "ports": [], "service_name": "mon", "started": "2025-12-07T09:39:09.816358Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@mon.compute-0", "version": "19.2.3"}, {"container_id": "e0f72a5bcd8e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.37%", "created": "2025-12-07T09:41:13.772564Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-07T09:42:51.215192Z", "memory_request": 2147483648, "memory_usage": 45906657, "ports": [], "service_name": "mon", "started": "2025-12-07T09:41:13.626105Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@mon.compute-1", "version": "19.2.3"}, {"container_id": "ef383f8f4cdb", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.04%", "created": "2025-12-07T09:41:11.988420Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-07T09:42:51.413576Z", "memory_request": 2147483648, "memory_usage": 43840962, "ports": [], "service_name": "mon", "started": "2025-12-07T09:41:11.846733Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@mon.compute-2", "version": "19.2.3"}, {"daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "events": ["2025-12-07T09:42:58.053284Z daemon:node-exporter.compu
Dec  7 04:43:07 np0005549474 systemd[1]: libpod-2451d30f30925f4e05f9ba9c2af7212154df11050db82243ec6e2b10f0e83eca.scope: Deactivated successfully.
Dec  7 04:43:07 np0005549474 conmon[95723]: conmon 2451d30f30925f4e05f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2451d30f30925f4e05f9ba9c2af7212154df11050db82243ec6e2b10f0e83eca.scope/container/memory.events
Dec  7 04:43:07 np0005549474 podman[95706]: 2025-12-07 09:43:07.561096996 +0000 UTC m=+0.537011663 container died 2451d30f30925f4e05f9ba9c2af7212154df11050db82243ec6e2b10f0e83eca (image=quay.io/ceph/ceph:v19, name=hopeful_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:43:07 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4482e65b09a1530372599be9fdc3a9cb24b79d84e824841ba4324d2fa33a38ff-merged.mount: Deactivated successfully.
Dec  7 04:43:07 np0005549474 podman[95706]: 2025-12-07 09:43:07.599964885 +0000 UTC m=+0.575879552 container remove 2451d30f30925f4e05f9ba9c2af7212154df11050db82243ec6e2b10f0e83eca (image=quay.io/ceph/ceph:v19, name=hopeful_colden, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:43:07 np0005549474 systemd[1]: libpod-conmon-2451d30f30925f4e05f9ba9c2af7212154df11050db82243ec6e2b10f0e83eca.scope: Deactivated successfully.
Dec  7 04:43:07 np0005549474 lvm[95829]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:43:07 np0005549474 lvm[95829]: VG ceph_vg0 finished
Dec  7 04:43:07 np0005549474 flamboyant_tu[95703]: {}
Dec  7 04:43:07 np0005549474 rsyslogd[1010]: message too long (11753) with configured size 8096, begin of message is: [{"container_id": "3282b59c6d2b", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
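[editor's note] rsyslogd truncated the 11753-byte orch ps message to its configured 8096-byte limit, which is why the JSON above ends mid-string. If complete records are wanted in this file, the limit is a global rsyslog setting; a sketch of the change (directive names per the rsyslog docs linked in the message; the legacy form typically has to appear before any input module is loaded):

    # /etc/rsyslog.conf
    global(maxMessageSize="64k")
    # or, legacy directive form:
    # $MaxMessageSize 64k

Restart rsyslog afterwards for the new limit to take effect.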
Dec  7 04:43:07 np0005549474 systemd[1]: libpod-dd77b1995ecd9903689bc5f6f0f80db14cb75fed60ceb5f99c3fe0e225ec003f.scope: Deactivated successfully.
Dec  7 04:43:07 np0005549474 systemd[1]: libpod-dd77b1995ecd9903689bc5f6f0f80db14cb75fed60ceb5f99c3fe0e225ec003f.scope: Consumed 1.197s CPU time.
Dec  7 04:43:07 np0005549474 podman[95682]: 2025-12-07 09:43:07.790162908 +0000 UTC m=+0.941396370 container died dd77b1995ecd9903689bc5f6f0f80db14cb75fed60ceb5f99c3fe0e225ec003f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_tu, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  7 04:43:07 np0005549474 systemd[1]: var-lib-containers-storage-overlay-1bd29829338db4bd6b255d4d1e1f81f1d606c298285246fe769a718300709755-merged.mount: Deactivated successfully.
Dec  7 04:43:07 np0005549474 podman[95682]: 2025-12-07 09:43:07.833457955 +0000 UTC m=+0.984691327 container remove dd77b1995ecd9903689bc5f6f0f80db14cb75fed60ceb5f99c3fe0e225ec003f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:43:07 np0005549474 systemd[1]: libpod-conmon-dd77b1995ecd9903689bc5f6f0f80db14cb75fed60ceb5f99c3fe0e225ec003f.scope: Deactivated successfully.
Dec  7 04:43:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:43:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:08 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 9f754fd6-06d3-4c8f-8a5e-dc1b741912ec (Updating rgw.rgw deployment (+3 -> 3))
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.httxcl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.httxcl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.httxcl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
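[editor's note] The keyring for the new RGW daemon is created by the mgr sending "auth get-or-create" to the mon as a JSON mon_command, dispatched and finished in the two audit lines above. The same call can be issued from Python through librados; a minimal sketch, assuming python3-rados and the admin keyring in /etc/ceph (the entity name is copied from the log):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = {
        "prefix": "auth get-or-create",
        "entity": "client.rgw.rgw.compute-2.httxcl",
        "caps": ["mon", "allow *", "mgr", "allow rw",
                 "osd", "allow rwx tag rgw *=*"],
    }
    # mon_command takes the command as a JSON string plus an input buffer
    # and returns (retcode, output buffer, status string).
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outbuf.decode())
    cluster.shutdown()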
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:08 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.httxcl on compute-2
Dec  7 04:43:08 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.httxcl on compute-2
Dec  7 04:43:08 np0005549474 python3[95869]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.httxcl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.httxcl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 04:43:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:08 np0005549474 podman[95870]: 2025-12-07 09:43:08.765407522 +0000 UTC m=+0.068831101 container create baabb0785cab978d217f8c47bfb3da393b8649d3523a0a6a7d9b42985a2aac8f (image=quay.io/ceph/ceph:v19, name=gracious_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:43:08 np0005549474 systemd[1]: Started libpod-conmon-baabb0785cab978d217f8c47bfb3da393b8649d3523a0a6a7d9b42985a2aac8f.scope.
Dec  7 04:43:08 np0005549474 podman[95870]: 2025-12-07 09:43:08.743542127 +0000 UTC m=+0.046965696 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:43:08 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1063aac70b44e5abc685e951bcfe6fb29931174d4bf007e6374f412ab8502621/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1063aac70b44e5abc685e951bcfe6fb29931174d4bf007e6374f412ab8502621/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:08 np0005549474 podman[95870]: 2025-12-07 09:43:08.865725053 +0000 UTC m=+0.169148702 container init baabb0785cab978d217f8c47bfb3da393b8649d3523a0a6a7d9b42985a2aac8f (image=quay.io/ceph/ceph:v19, name=gracious_rhodes, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:43:08 np0005549474 podman[95870]: 2025-12-07 09:43:08.87874719 +0000 UTC m=+0.182170759 container start baabb0785cab978d217f8c47bfb3da393b8649d3523a0a6a7d9b42985a2aac8f (image=quay.io/ceph/ceph:v19, name=gracious_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:43:08 np0005549474 podman[95870]: 2025-12-07 09:43:08.882350507 +0000 UTC m=+0.185774136 container attach baabb0785cab978d217f8c47bfb3da393b8649d3523a0a6a7d9b42985a2aac8f (image=quay.io/ceph/ceph:v19, name=gracious_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1223296242' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  7 04:43:09 np0005549474 gracious_rhodes[95886]: 
Dec  7 04:43:09 np0005549474 gracious_rhodes[95886]: {"fsid":"75f4c9fd-539a-5e17-b55a-0a12a4e2736c","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":105,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":39,"num_osds":3,"num_up_osds":3,"osd_up_since":1765100522,"num_in_osds":3,"osd_in_since":1765100495,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":70}],"num_pgs":70,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84144128,"bytes_avail":64327782400,"bytes_total":64411926528,"read_bytes_sec":15015,"write_bytes_sec":0,"read_op_per_sec":4,"write_op_per_sec":1},"fsmap":{"epoch":2,"btime":"2025-12-07T09:42:50:843512+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2025-12-07T09:42:18.986103+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.buauyv":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.ntknug":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
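[editor's note] gracious_rhodes is the "ceph -s -f json" run from the Ansible task at 04:43:08. The HEALTH_ERR is expected at this stage of the deployment: MDS_ALL_DOWN fires because the cephfs filesystem exists but no MDS daemon is up yet (the fsmap shows up:0, max:1). A sketch of the fields a playbook would typically gate on, assuming the blob was saved to a file (name hypothetical):

    import json

    with open("status.json") as f:
        status = json.load(f)

    print("health:", status["health"]["status"])
    for name, check in status["health"]["checks"].items():
        print(f"  {name}: [{check['severity']}] {check['summary']['message']}")
    osd = status["osdmap"]
    print(f"osds: {osd['num_up_osds']}/{osd['num_osds']} up, "
          f"{osd['num_in_osds']} in")
    pg = status["pgmap"]
    print(f"pgs: {pg['num_pgs']} total,",
          ", ".join(f"{s['count']} {s['state_name']}" for s in pg["pgs_by_state"]))

For the status above this reports HEALTH_ERR with the two MDS checks, 3/3 OSDs up and in, and 70 active+clean PGs.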
Dec  7 04:43:09 np0005549474 systemd[1]: libpod-baabb0785cab978d217f8c47bfb3da393b8649d3523a0a6a7d9b42985a2aac8f.scope: Deactivated successfully.
Dec  7 04:43:09 np0005549474 podman[95870]: 2025-12-07 09:43:09.362732515 +0000 UTC m=+0.666156074 container died baabb0785cab978d217f8c47bfb3da393b8649d3523a0a6a7d9b42985a2aac8f (image=quay.io/ceph/ceph:v19, name=gracious_rhodes, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:09 np0005549474 systemd[1]: var-lib-containers-storage-overlay-1063aac70b44e5abc685e951bcfe6fb29931174d4bf007e6374f412ab8502621-merged.mount: Deactivated successfully.
Dec  7 04:43:09 np0005549474 podman[95870]: 2025-12-07 09:43:09.403954837 +0000 UTC m=+0.707378356 container remove baabb0785cab978d217f8c47bfb3da393b8649d3523a0a6a7d9b42985a2aac8f (image=quay.io/ceph/ceph:v19, name=gracious_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 04:43:09 np0005549474 systemd[1]: libpod-conmon-baabb0785cab978d217f8c47bfb3da393b8649d3523a0a6a7d9b42985a2aac8f.scope: Deactivated successfully.
Dec  7 04:43:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v17: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: Deploying daemon rgw.rgw.compute-2.httxcl on compute-2
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cefzmy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cefzmy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cefzmy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
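
The dispatch/finished pair above shows the full shape of the mon_command cephadm issues when it mints a keyring for a new RGW daemon. A sketch of that payload reconstructed from the audit lines (the entity name is the per-daemon one seen above); the caps list alternates service name and capability string, mirroring `ceph auth get-or-create <entity> mon 'allow *' mgr 'allow rw' ...` on the CLI:

    import json

    # Payload as logged: entity plus mon/mgr/osd capability pairs.
    payload = {
        "prefix": "auth get-or-create",
        "entity": "client.rgw.rgw.compute-1.cefzmy",
        "caps": ["mon", "allow *", "mgr", "allow rw",
                 "osd", "allow rwx tag rgw *=*"],
    }
    print(json.dumps(payload, indent=2))
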
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:09 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.cefzmy on compute-1
Dec  7 04:43:09 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.cefzmy on compute-1
Dec  7 04:43:10 np0005549474 python3[95950]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
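
The ansible task above shells out to podman to run `ceph config dump -f json` inside the ceph image. Roughly the same call from Python, trimmed to the essential arguments (fsid, image tag, and paths copied from the log line; the extra assimilate_ceph.conf mount is omitted here for brevity):

    import json
    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "config", "dump", "-f", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    print(len(json.loads(out)), "config options set")
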
Dec  7 04:43:10 np0005549474 podman[95951]: 2025-12-07 09:43:10.607143052 +0000 UTC m=+0.045530537 container create 94bd425774e14abd3992b7f15389848fc128de00af1b27ba5df0c5ce74aa5b91 (image=quay.io/ceph/ceph:v19, name=vigilant_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:43:10 np0005549474 systemd[1]: Started libpod-conmon-94bd425774e14abd3992b7f15389848fc128de00af1b27ba5df0c5ce74aa5b91.scope.
Dec  7 04:43:10 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9c238d6277bb4233f5d76084a67bec035bc255802804ac328e8a1c5fc7fd27/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9c238d6277bb4233f5d76084a67bec035bc255802804ac328e8a1c5fc7fd27/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:10 np0005549474 podman[95951]: 2025-12-07 09:43:10.588396722 +0000 UTC m=+0.026784227 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:43:10 np0005549474 podman[95951]: 2025-12-07 09:43:10.683493204 +0000 UTC m=+0.121880709 container init 94bd425774e14abd3992b7f15389848fc128de00af1b27ba5df0c5ce74aa5b91 (image=quay.io/ceph/ceph:v19, name=vigilant_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:43:10 np0005549474 podman[95951]: 2025-12-07 09:43:10.689567996 +0000 UTC m=+0.127955471 container start 94bd425774e14abd3992b7f15389848fc128de00af1b27ba5df0c5ce74aa5b91 (image=quay.io/ceph/ceph:v19, name=vigilant_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 04:43:10 np0005549474 podman[95951]: 2025-12-07 09:43:10.717524783 +0000 UTC m=+0.155912298 container attach 94bd425774e14abd3992b7f15389848fc128de00af1b27ba5df0c5ce74aa5b91 (image=quay.io/ceph/ceph:v19, name=vigilant_roentgen, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  7 04:43:10 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:10 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:10 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:10 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cefzmy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:10 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cefzmy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 04:43:10 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec  7 04:43:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec  7 04:43:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec  7 04:43:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 40 pg[9.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec  7 04:43:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168838345' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  7 04:43:11 np0005549474 vigilant_roentgen[95966]: 
Dec  7 04:43:11 np0005549474 systemd[1]: libpod-94bd425774e14abd3992b7f15389848fc128de00af1b27ba5df0c5ce74aa5b91.scope: Deactivated successfully.
Dec  7 04:43:11 np0005549474 conmon[95966]: conmon 94bd425774e14abd3992 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94bd425774e14abd3992b7f15389848fc128de00af1b27ba5df0c5ce74aa5b91.scope/container/memory.events
Dec  7 04:43:11 np0005549474 vigilant_roentgen[95966]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.dotugk/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.buauyv/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.ntknug/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502923980","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-1.cefzmy","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.httxcl","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
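
The config dump above is a flat JSON array of {section, name, value, ...} records. A small sketch that regroups it by config section, assuming the array has been saved to config_dump.json (an illustrative file name), and prints the per-daemon rgw_frontends assignments visible at the end of the dump:

    import json
    from collections import defaultdict

    with open("config_dump.json") as f:
        dump = json.load(f)

    # Regroup the flat option list by config section.
    by_section = defaultdict(dict)
    for opt in dump:
        by_section[opt["section"]][opt["name"]] = opt["value"]

    for section, opts in by_section.items():
        if "rgw_frontends" in opts:
            print(section, "->", opts["rgw_frontends"])
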
Dec  7 04:43:11 np0005549474 podman[95951]: 2025-12-07 09:43:11.08710906 +0000 UTC m=+0.525496555 container died 94bd425774e14abd3992b7f15389848fc128de00af1b27ba5df0c5ce74aa5b91 (image=quay.io/ceph/ceph:v19, name=vigilant_roentgen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Dec  7 04:43:11 np0005549474 systemd[1]: var-lib-containers-storage-overlay-db9c238d6277bb4233f5d76084a67bec035bc255802804ac328e8a1c5fc7fd27-merged.mount: Deactivated successfully.
Dec  7 04:43:11 np0005549474 podman[95951]: 2025-12-07 09:43:11.128492517 +0000 UTC m=+0.566880002 container remove 94bd425774e14abd3992b7f15389848fc128de00af1b27ba5df0c5ce74aa5b91 (image=quay.io/ceph/ceph:v19, name=vigilant_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 04:43:11 np0005549474 systemd[1]: libpod-conmon-94bd425774e14abd3992b7f15389848fc128de00af1b27ba5df0c5ce74aa5b91.scope: Deactivated successfully.
Dec  7 04:43:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v19: 71 pgs: 1 unknown, 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kbsleq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kbsleq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kbsleq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:11 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.kbsleq on compute-0
Dec  7 04:43:11 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.kbsleq on compute-0
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: Deploying daemon rgw.rgw.compute-1.cefzmy on compute-1
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.102:0/3796621305' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kbsleq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kbsleq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec  7 04:43:11 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec  7 04:43:11 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 41 pg[9.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:12 np0005549474 python3[96080]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:43:12 np0005549474 podman[96099]: 2025-12-07 09:43:12.177762368 +0000 UTC m=+0.052294669 container create e38107629bdd21845db071b63e8e439915e0a98285630eb7bf36d98e32e29fce (image=quay.io/ceph/ceph:v19, name=serene_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 04:43:12 np0005549474 systemd[1]: Started libpod-conmon-e38107629bdd21845db071b63e8e439915e0a98285630eb7bf36d98e32e29fce.scope.
Dec  7 04:43:12 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e93235a6deede68e71752489147e09104832168df728aab10090096f39116c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e93235a6deede68e71752489147e09104832168df728aab10090096f39116c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:12 np0005549474 podman[96099]: 2025-12-07 09:43:12.15947925 +0000 UTC m=+0.034011601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:43:12 np0005549474 podman[96099]: 2025-12-07 09:43:12.25901941 +0000 UTC m=+0.133551731 container init e38107629bdd21845db071b63e8e439915e0a98285630eb7bf36d98e32e29fce (image=quay.io/ceph/ceph:v19, name=serene_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:43:12 np0005549474 podman[96099]: 2025-12-07 09:43:12.263989383 +0000 UTC m=+0.138521684 container start e38107629bdd21845db071b63e8e439915e0a98285630eb7bf36d98e32e29fce (image=quay.io/ceph/ceph:v19, name=serene_moser, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:12 np0005549474 podman[96099]: 2025-12-07 09:43:12.267450075 +0000 UTC m=+0.141982396 container attach e38107629bdd21845db071b63e8e439915e0a98285630eb7bf36d98e32e29fce (image=quay.io/ceph/ceph:v19, name=serene_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 04:43:12 np0005549474 podman[96143]: 2025-12-07 09:43:12.271922855 +0000 UTC m=+0.038669605 container create b27042284838adc1a98e05b8acec8e55a017a06cf51e071e1eed7697020a6062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jennings, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:43:12 np0005549474 systemd[1]: Started libpod-conmon-b27042284838adc1a98e05b8acec8e55a017a06cf51e071e1eed7697020a6062.scope.
Dec  7 04:43:12 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:12 np0005549474 podman[96143]: 2025-12-07 09:43:12.325089786 +0000 UTC m=+0.091836616 container init b27042284838adc1a98e05b8acec8e55a017a06cf51e071e1eed7697020a6062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jennings, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:43:12 np0005549474 podman[96143]: 2025-12-07 09:43:12.334061216 +0000 UTC m=+0.100807996 container start b27042284838adc1a98e05b8acec8e55a017a06cf51e071e1eed7697020a6062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:43:12 np0005549474 festive_jennings[96161]: 167 167
Dec  7 04:43:12 np0005549474 systemd[1]: libpod-b27042284838adc1a98e05b8acec8e55a017a06cf51e071e1eed7697020a6062.scope: Deactivated successfully.
Dec  7 04:43:12 np0005549474 podman[96143]: 2025-12-07 09:43:12.3383713 +0000 UTC m=+0.105118050 container attach b27042284838adc1a98e05b8acec8e55a017a06cf51e071e1eed7697020a6062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 04:43:12 np0005549474 podman[96143]: 2025-12-07 09:43:12.338726291 +0000 UTC m=+0.105473031 container died b27042284838adc1a98e05b8acec8e55a017a06cf51e071e1eed7697020a6062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:43:12 np0005549474 podman[96143]: 2025-12-07 09:43:12.255742722 +0000 UTC m=+0.022489512 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:12 np0005549474 systemd[1]: var-lib-containers-storage-overlay-64b1a0e600d062dd27e6c5d7710141a1713cff838399e0822bf8bb5a149cffd7-merged.mount: Deactivated successfully.
Dec  7 04:43:12 np0005549474 podman[96143]: 2025-12-07 09:43:12.378141924 +0000 UTC m=+0.144888674 container remove b27042284838adc1a98e05b8acec8e55a017a06cf51e071e1eed7697020a6062 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:12 np0005549474 systemd[1]: libpod-conmon-b27042284838adc1a98e05b8acec8e55a017a06cf51e071e1eed7697020a6062.scope: Deactivated successfully.
Dec  7 04:43:12 np0005549474 systemd[1]: Reloading.
Dec  7 04:43:12 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:43:12 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764855908' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec  7 04:43:12 np0005549474 serene_moser[96141]: mimic
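
The bare "mimic" line is the container's answer to the `osd get-require-min-compat-client` query dispatched just above. A sketch for reading that value and ranking it against the release order, assuming `ceph` is on PATH and using a hand-written release list (the list is an assumption for illustration; check it against the documentation for your release line):

    import subprocess

    # Ceph release names in order, luminous onward (assumed list).
    RELEASES = ["luminous", "mimic", "nautilus", "octopus",
                "pacific", "quincy", "reef", "squid"]

    current = subprocess.run(
        ["ceph", "osd", "get-require-min-compat-client"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(f"min compat client: {current} (rank {RELEASES.index(current)} of {len(RELEASES) - 1})")
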
Dec  7 04:43:12 np0005549474 podman[96099]: 2025-12-07 09:43:12.629377208 +0000 UTC m=+0.503909519 container died e38107629bdd21845db071b63e8e439915e0a98285630eb7bf36d98e32e29fce (image=quay.io/ceph/ceph:v19, name=serene_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:43:12 np0005549474 systemd[1]: libpod-e38107629bdd21845db071b63e8e439915e0a98285630eb7bf36d98e32e29fce.scope: Deactivated successfully.
Dec  7 04:43:12 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e2e93235a6deede68e71752489147e09104832168df728aab10090096f39116c-merged.mount: Deactivated successfully.
Dec  7 04:43:12 np0005549474 podman[96099]: 2025-12-07 09:43:12.686350031 +0000 UTC m=+0.560882342 container remove e38107629bdd21845db071b63e8e439915e0a98285630eb7bf36d98e32e29fce (image=quay.io/ceph/ceph:v19, name=serene_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 04:43:12 np0005549474 systemd[1]: libpod-conmon-e38107629bdd21845db071b63e8e439915e0a98285630eb7bf36d98e32e29fce.scope: Deactivated successfully.
Dec  7 04:43:12 np0005549474 systemd[1]: Reloading.
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: Deploying daemon rgw.rgw.compute-0.kbsleq on compute-0
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  7 04:43:12 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:43:12 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec  7 04:43:12 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec  7 04:43:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  7 04:43:13 np0005549474 systemd[1]: Starting Ceph rgw.rgw.compute-0.kbsleq for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:43:13 np0005549474 podman[96334]: 2025-12-07 09:43:13.200522231 +0000 UTC m=+0.035913591 container create 927a29b532efb619a23e959d90ea2003f8e4990350873a0b2834ad0d1dc072a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-rgw-rgw-compute-0-kbsleq, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 04:43:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa75fdc8acc6604240ce97cb3ca0adaee8c1a0c9320caa55ae6a5fa1e001f00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa75fdc8acc6604240ce97cb3ca0adaee8c1a0c9320caa55ae6a5fa1e001f00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa75fdc8acc6604240ce97cb3ca0adaee8c1a0c9320caa55ae6a5fa1e001f00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa75fdc8acc6604240ce97cb3ca0adaee8c1a0c9320caa55ae6a5fa1e001f00/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.kbsleq supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:13 np0005549474 podman[96334]: 2025-12-07 09:43:13.259408685 +0000 UTC m=+0.094800085 container init 927a29b532efb619a23e959d90ea2003f8e4990350873a0b2834ad0d1dc072a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-rgw-rgw-compute-0-kbsleq, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:43:13 np0005549474 podman[96334]: 2025-12-07 09:43:13.264546193 +0000 UTC m=+0.099937563 container start 927a29b532efb619a23e959d90ea2003f8e4990350873a0b2834ad0d1dc072a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-rgw-rgw-compute-0-kbsleq, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:43:13 np0005549474 bash[96334]: 927a29b532efb619a23e959d90ea2003f8e4990350873a0b2834ad0d1dc072a0
Dec  7 04:43:13 np0005549474 podman[96334]: 2025-12-07 09:43:13.184173344 +0000 UTC m=+0.019564744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:13 np0005549474 systemd[1]: Started Ceph rgw.rgw.compute-0.kbsleq for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:43:13 np0005549474 radosgw[96353]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec  7 04:43:13 np0005549474 radosgw[96353]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec  7 04:43:13 np0005549474 radosgw[96353]: framework: beast
Dec  7 04:43:13 np0005549474 radosgw[96353]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec  7 04:43:13 np0005549474 radosgw[96353]: init_numa not setting numa affinity
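
The "framework: beast" and "framework conf key: endpoint" lines above are radosgw echoing back the per-daemon rgw_frontends option set earlier (e.g. "beast endpoint=192.168.122.100:8082"). A sketch of how such a value splits into the framework name and key=value pairs, matching what the daemon logs:

    def parse_frontends(value: str):
        # First token is the framework, the rest are key=value pairs.
        framework, *pairs = value.split()
        return framework, dict(pair.split("=", 1) for pair in pairs)

    framework, conf = parse_frontends("beast endpoint=192.168.122.100:8082")
    print(framework, conf["endpoint"])  # beast 192.168.122.100:8082
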
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:13 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 9f754fd6-06d3-4c8f-8a5e-dc1b741912ec (Updating rgw.rgw deployment (+3 -> 3))
Dec  7 04:43:13 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 9f754fd6-06d3-4c8f-8a5e-dc1b741912ec (Updating rgw.rgw deployment (+3 -> 3)) in 5 seconds
Dec  7 04:43:13 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 04:43:13 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:13 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 7ad0535e-5ec1-462b-acbd-3aa08aaa497a (Updating mds.cephfs deployment (+3 -> 3))
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rxtsyx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rxtsyx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rxtsyx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:13 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.rxtsyx on compute-2
Dec  7 04:43:13 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.rxtsyx on compute-2
Dec  7 04:43:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v22: 72 pgs: 2 unknown, 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:43:13 np0005549474 python3[96965]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
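
This task runs `ceph versions -f json`, which reports how many daemons of each type are running each version string, keyed by daemon type plus an "overall" rollup. A sketch that flattens that report, invoked directly rather than through the podman wrapper above (the output shape is assumed from the command's documented JSON format):

    import json
    import subprocess

    out = subprocess.run(["ceph", "versions", "-f", "json"],
                         check=True, capture_output=True, text=True).stdout
    # Each daemon type (mon, mgr, osd, ..., plus "overall") maps
    # version strings to daemon counts.
    for daemon_type, counts in json.loads(out).items():
        for version_string, count in counts.items():
            print(f"{daemon_type}: {count} x {version_string}")
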
Dec  7 04:43:13 np0005549474 podman[96966]: 2025-12-07 09:43:13.640618003 +0000 UTC m=+0.040752670 container create 6c38278ec71ea0aac572945a783b2394e97f0030486c76941cf237620bc29874 (image=quay.io/ceph/ceph:v19, name=recursing_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:43:13 np0005549474 systemd[1]: Started libpod-conmon-6c38278ec71ea0aac572945a783b2394e97f0030486c76941cf237620bc29874.scope.
Dec  7 04:43:13 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c00689fb4c619fba35fa3eaf9cb7b63ec69d369393f6c05228d178af8dea3c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c00689fb4c619fba35fa3eaf9cb7b63ec69d369393f6c05228d178af8dea3c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:13 np0005549474 podman[96966]: 2025-12-07 09:43:13.713642075 +0000 UTC m=+0.113776752 container init 6c38278ec71ea0aac572945a783b2394e97f0030486c76941cf237620bc29874 (image=quay.io/ceph/ceph:v19, name=recursing_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:43:13 np0005549474 podman[96966]: 2025-12-07 09:43:13.62290324 +0000 UTC m=+0.023037927 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:43:13 np0005549474 podman[96966]: 2025-12-07 09:43:13.71945067 +0000 UTC m=+0.119585337 container start 6c38278ec71ea0aac572945a783b2394e97f0030486c76941cf237620bc29874 (image=quay.io/ceph/ceph:v19, name=recursing_dhawan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 04:43:13 np0005549474 podman[96966]: 2025-12-07 09:43:13.723023086 +0000 UTC m=+0.123157753 container attach 6c38278ec71ea0aac572945a783b2394e97f0030486c76941cf237620bc29874 (image=quay.io/ceph/ceph:v19, name=recursing_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.102:0/197616850' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rxtsyx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rxtsyx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 04:43:13 np0005549474 ceph-mon[74516]: Deploying daemon mds.cephfs.compute-2.rxtsyx on compute-2
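Annotation: the mon_command audited at 04:43:13 is cephadm minting a cephx key for the new MDS with the standard MDS capability profile before deploying it. A minimal Python sketch of the equivalent CLI call (entity name and caps copied verbatim from the audit lines; assumes an admin keyring on the host):

    import subprocess

    # Mint (or fetch, if it already exists) the MDS key exactly as the audit
    # log shows; "auth get-or-create" is idempotent, so re-running it is safe.
    subprocess.run([
        "ceph", "auth", "get-or-create", "mds.cephfs.compute-2.rxtsyx",
        "mon", "profile mds",
        "osd", "allow rw tag cephfs *=*",
        "mds", "allow",
    ], check=True)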
Dec  7 04:43:14 np0005549474 recursing_dhawan[96981]: 
Dec  7 04:43:14 np0005549474 systemd[1]: libpod-6c38278ec71ea0aac572945a783b2394e97f0030486c76941cf237620bc29874.scope: Deactivated successfully.
Dec  7 04:43:14 np0005549474 recursing_dhawan[96981]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":9}}
Dec  7 04:43:14 np0005549474 podman[96966]: 2025-12-07 09:43:14.14527975 +0000 UTC m=+0.545414437 container died 6c38278ec71ea0aac572945a783b2394e97f0030486c76941cf237620bc29874 (image=quay.io/ceph/ceph:v19, name=recursing_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  7 04:43:14 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f7c00689fb4c619fba35fa3eaf9cb7b63ec69d369393f6c05228d178af8dea3c-merged.mount: Deactivated successfully.
Dec  7 04:43:14 np0005549474 podman[96966]: 2025-12-07 09:43:14.182116115 +0000 UTC m=+0.582250782 container remove 6c38278ec71ea0aac572945a783b2394e97f0030486c76941cf237620bc29874 (image=quay.io/ceph/ceph:v19, name=recursing_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 04:43:14 np0005549474 systemd[1]: libpod-conmon-6c38278ec71ea0aac572945a783b2394e97f0030486c76941cf237620bc29874.scope: Deactivated successfully.
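Annotation: the container recursing_dhawan (create at 04:43:13 through remove at 04:43:14) is a one-shot version probe: Ansible runs the ceph CLI in a throwaway container, the JSON printed just above reports all nine daemons on 19.2.3 squid, and the container is torn down. A minimal Python sketch of the same probe, assuming podman and the image/fsid/keyring paths shown in the log (the assimilate_ceph.conf bind mount is omitted since "versions" does not need it):

    import json
    import subprocess

    # One-shot "ceph versions" probe in a disposable container, mirroring the
    # podman invocation logged by the Ansible task above.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "versions", "-f", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    versions = json.loads(out)
    # e.g. {"mon": {...: 3}, "mgr": {...: 3}, "osd": {...: 3}, "overall": {...: 9}}
    for daemon_type, counts in versions.items():
        for banner, n in counts.items():
            print(f"{daemon_type}: {n} x {banner}")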
Dec  7 04:43:14 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 11 completed events
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:14 np0005549474 ceph-mgr[74811]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
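Annotation: the progress module that logs "Starting Global Recovery Event" tracks PGs not yet active+clean and persists finished events under the mgr/progress/completed config key written just above. Its state can also be queried live; a sketch, assuming the mgr's "ceph progress json" command and its events/progress field names:

    import json
    import subprocess

    # Dump in-flight progress events (field names assumed from the mgr
    # progress module's JSON output; hedge accordingly).
    raw = subprocess.run(["ceph", "progress", "json"],
                         check=True, capture_output=True, text=True).stdout
    for ev in json.loads(raw).get("events", []):
        print(f'{ev.get("progress", 0.0):6.1%}  {ev.get("message", "")}')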
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  7 04:43:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  7 04:43:15 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 44 pg[11.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [0] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qgzqbk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qgzqbk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qgzqbk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:15 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.qgzqbk on compute-0
Dec  7 04:43:15 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.qgzqbk on compute-0
Dec  7 04:43:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v25: 73 pgs: 3 unknown, 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:43:15 np0005549474 podman[97117]: 2025-12-07 09:43:15.903963722 +0000 UTC m=+0.049517594 container create 4581974ae5f1378716f240cb54cc409806d5330fe2113dfeaa1bf875602295b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chaplygin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:43:15 np0005549474 systemd[1]: Started libpod-conmon-4581974ae5f1378716f240cb54cc409806d5330fe2113dfeaa1bf875602295b7.scope.
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec  7 04:43:15 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec  7 04:43:15 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 45 pg[11.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [0] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
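Annotation: the two osd.0 lines show a freshly created PG (11.0) walking the peering state machine, from Start, to Primary, to active once all replicas have activated; meanwhile the mgr's pgmap briefly counts it as unknown or creating+peering. The same per-state counters the pgmap lines report can be read from "ceph status"; a sketch:

    import json
    import subprocess

    # Summarize PG states the way the pgmap debug lines do
    # (e.g. "1 creating+peering, 72 active+clean").
    raw = subprocess.run(["ceph", "status", "-f", "json"],
                         check=True, capture_output=True, text=True).stdout
    pgmap = json.loads(raw)["pgmap"]
    for bucket in pgmap.get("pgs_by_state", []):
        print(bucket["count"], bucket["state_name"])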
Dec  7 04:43:15 np0005549474 podman[97117]: 2025-12-07 09:43:15.964313915 +0000 UTC m=+0.109867827 container init 4581974ae5f1378716f240cb54cc409806d5330fe2113dfeaa1bf875602295b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chaplygin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.102:0/197616850' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qgzqbk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qgzqbk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: Deploying daemon mds.cephfs.compute-0.qgzqbk on compute-0
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  7 04:43:15 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  7 04:43:15 np0005549474 podman[97117]: 2025-12-07 09:43:15.978136194 +0000 UTC m=+0.123690076 container start 4581974ae5f1378716f240cb54cc409806d5330fe2113dfeaa1bf875602295b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Dec  7 04:43:15 np0005549474 podman[97117]: 2025-12-07 09:43:15.981996687 +0000 UTC m=+0.127550569 container attach 4581974ae5f1378716f240cb54cc409806d5330fe2113dfeaa1bf875602295b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 04:43:15 np0005549474 modest_chaplygin[97133]: 167 167
Dec  7 04:43:15 np0005549474 podman[97117]: 2025-12-07 09:43:15.888761986 +0000 UTC m=+0.034315888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:15 np0005549474 systemd[1]: libpod-4581974ae5f1378716f240cb54cc409806d5330fe2113dfeaa1bf875602295b7.scope: Deactivated successfully.
Dec  7 04:43:15 np0005549474 podman[97117]: 2025-12-07 09:43:15.985520331 +0000 UTC m=+0.131074223 container died 4581974ae5f1378716f240cb54cc409806d5330fe2113dfeaa1bf875602295b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:43:16 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5406f9d38745e785a9661081b249cc9ea8ac4b06428aa6074cb90d9998cd75e0-merged.mount: Deactivated successfully.
Dec  7 04:43:16 np0005549474 podman[97117]: 2025-12-07 09:43:16.027407491 +0000 UTC m=+0.172961373 container remove 4581974ae5f1378716f240cb54cc409806d5330fe2113dfeaa1bf875602295b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:43:16 np0005549474 systemd[1]: libpod-conmon-4581974ae5f1378716f240cb54cc409806d5330fe2113dfeaa1bf875602295b7.scope: Deactivated successfully.
Dec  7 04:43:16 np0005549474 systemd[1]: Reloading.
Dec  7 04:43:16 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:43:16 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:43:16 np0005549474 systemd[1]: Reloading.
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e3 new map
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e3 print_map
    e3
    btime 2025-12-07T09:43:16.384831+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  2
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-12-07T09:42:50.843467+0000
    modified  2025-12-07T09:42:50.843467+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in
    up  {}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    qdb_cluster  leader: 0 members:

    Standby daemons:

    [mds.cephfs.compute-2.rxtsyx{-1:24211} state up:standby seq 1 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] up:boot
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] as mds.0
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.rxtsyx assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.rxtsyx"} v 0)
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.rxtsyx"}]: dispatch
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e3 all = 0
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e4 new map
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e4 print_map
    e4
    btime 2025-12-07T09:43:16.410151+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  4
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-12-07T09:42:50.843467+0000
    modified  2025-12-07T09:43:16.410136+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24211}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    qdb_cluster  leader: 0 members:
    [mds.cephfs.compute-2.rxtsyx{0:24211} state up:creating seq 1 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.rxtsyx=up:creating}
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.rxtsyx is now active in filesystem cephfs as rank 0
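Annotation: between fsmap e3 and e4 the filesystem goes from offline (one standby, MDS_ALL_DOWN) to online (the standby promoted to rank 0, both health checks cleared). The same map the mon dumps via print_map can be inspected from the CLI; a sketch using standard commands:

    import subprocess

    # Condensed view of the fsmap: ranks, states and standbys for 'cephfs'.
    subprocess.run(["ceph", "fs", "status", "cephfs"], check=True)
    # Full map, matching the print_map fields (max_mds, data_pools, ...).
    subprocess.run(["ceph", "fs", "dump"], check=True)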
Dec  7 04:43:16 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:43:16 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:43:16 np0005549474 systemd[1]: Starting Ceph mds.cephfs.compute-0.qgzqbk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:43:16 np0005549474 podman[97282]: 2025-12-07 09:43:16.883981123 +0000 UTC m=+0.034769700 container create d1f38ded128736fb76dd89ef2909e2ed5648b30bbf53bba3aa01b9b3939768ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mds-cephfs-compute-0-qgzqbk, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41e48310b99a1f443c3d7944d09dfad7e0287c965cb427cc6d074ef16d09530/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41e48310b99a1f443c3d7944d09dfad7e0287c965cb427cc6d074ef16d09530/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41e48310b99a1f443c3d7944d09dfad7e0287c965cb427cc6d074ef16d09530/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41e48310b99a1f443c3d7944d09dfad7e0287c965cb427cc6d074ef16d09530/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.qgzqbk supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:16 np0005549474 podman[97282]: 2025-12-07 09:43:16.936441685 +0000 UTC m=+0.087230282 container init d1f38ded128736fb76dd89ef2909e2ed5648b30bbf53bba3aa01b9b3939768ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mds-cephfs-compute-0-qgzqbk, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:16 np0005549474 podman[97282]: 2025-12-07 09:43:16.942010734 +0000 UTC m=+0.092799311 container start d1f38ded128736fb76dd89ef2909e2ed5648b30bbf53bba3aa01b9b3939768ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mds-cephfs-compute-0-qgzqbk, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 04:43:16 np0005549474 bash[97282]: d1f38ded128736fb76dd89ef2909e2ed5648b30bbf53bba3aa01b9b3939768ab
Dec  7 04:43:16 np0005549474 podman[97282]: 2025-12-07 09:43:16.868329184 +0000 UTC m=+0.019117781 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec  7 04:43:16 np0005549474 systemd[1]: Started Ceph mds.cephfs.compute-0.qgzqbk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
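Annotation: the "Starting/Started Ceph mds..." pair is the cephadm-generated systemd template unit wrapping the podman container created just above; cephadm names these units ceph-<fsid>@<daemon>.service, which also explains the systemd "Reloading." lines when the unit files are (re)generated. A sketch for inspecting the unit on this host:

    import subprocess

    # cephadm wraps each containerized daemon in ceph-<fsid>@<daemon>.service.
    fsid = "75f4c9fd-539a-5e17-b55a-0a12a4e2736c"
    unit = f"ceph-{fsid}@mds.cephfs.compute-0.qgzqbk.service"
    subprocess.run(["systemctl", "status", "--no-pager", unit], check=False)
    subprocess.run(["journalctl", "-u", unit, "-n", "20", "--no-pager"], check=False)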
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  7 04:43:16 np0005549474 ceph-mds[97301]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 04:43:16 np0005549474 ceph-mds[97301]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec  7 04:43:16 np0005549474 ceph-mds[97301]: main not setting numa affinity
Dec  7 04:43:16 np0005549474 ceph-mds[97301]: pidfile_write: ignore empty --pid-file
Dec  7 04:43:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mds-cephfs-compute-0-qgzqbk[97297]: starting mds.cephfs.compute-0.qgzqbk at 
Dec  7 04:43:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:43:16 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Updating MDS map to version 4 from mon.0
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ihigcc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ihigcc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ihigcc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:17 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.ihigcc on compute-1
Dec  7 04:43:17 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.ihigcc on compute-1
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e5 new map
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e5 print_map
    e5
    btime 2025-12-07T09:43:17.384030+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  5
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-12-07T09:42:50.843467+0000
    modified  2025-12-07T09:43:17.384027+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24211}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    qdb_cluster  leader: 24211 members: 24211
    [mds.cephfs.compute-2.rxtsyx{0:24211} state up:active seq 2 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.qgzqbk{-1:14604} state up:standby seq 1 addr [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 04:43:17 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Updating MDS map to version 5 from mon.0
Dec  7 04:43:17 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Monitors have assigned me to become a standby
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] up:active
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] up:boot
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.rxtsyx=up:active} 1 up:standby
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.qgzqbk"} v 0)
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.qgzqbk"}]: dispatch
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e5 all = 0
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: daemon mds.cephfs.compute-2.rxtsyx assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: daemon mds.cephfs.compute-2.rxtsyx is now active in filesystem cephfs as rank 0
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.102:0/197616850' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ihigcc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ihigcc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 04:43:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v28: 74 pgs: 1 unknown, 1 creating+peering, 72 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 1.7 KiB/s wr, 9 op/s
Dec  7 04:43:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: Deploying daemon mds.cephfs.compute-1.ihigcc on compute-1
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.102:0/197616850' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
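Annotation: each RGW instance races to set pg_autoscale_bias=4 on default.rgw.meta, biasing the PG autoscaler toward giving this small but latency-sensitive metadata pool more PGs than its byte count alone would earn it (hence the repeated dispatch/finished pairs from all three gateways). The equivalent one-liner, sketched:

    import subprocess

    # Bias the autoscaler: treat default.rgw.meta as if it needed 4x the PGs
    # its stored bytes would normally warrant.
    subprocess.run([
        "ceph", "osd", "pool", "set",
        "default.rgw.meta", "pg_autoscale_bias", "4",
    ], check=True)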
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:18 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 7ad0535e-5ec1-462b-acbd-3aa08aaa497a (Updating mds.cephfs deployment (+3 -> 3))
Dec  7 04:43:18 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 7ad0535e-5ec1-462b-acbd-3aa08aaa497a (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
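Annotation: the truncated "config set, name=mds_join_fs" command above is cephadm pinning the mds.cephfs service to its filesystem, so its daemons only volunteer for 'cephfs'. The audit line elides the config section and value; a sketch of the likely full command (both assumed from cephadm's MDS service behaviour, not visible in the log):

    import subprocess

    # Pin every daemon in the mds.cephfs service to the 'cephfs' filesystem.
    # Section "mds.cephfs" and value "cephfs" are assumptions; the audit log
    # above truncates them.
    subprocess.run([
        "ceph", "config", "set", "mds.cephfs", "mds_join_fs", "cephfs",
    ], check=True)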
Dec  7 04:43:18 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev ac129f2a-0f92-4556-9830-88d507dfd802 (Updating nfs.cephfs deployment (+3 -> 3))
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:18 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.jddrlu
Dec  7 04:43:18 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.jddrlu
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 04:43:18 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  7 04:43:18 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
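Annotation: the get-or-create / use / auth-rm triplet for client.mgr.nfs.grace.nfs.cephfs is how the mgr updates the ganesha grace database for the new NFS daemon: it mints a short-lived client scoped to the .nfs pool, registers nfs.cephfs.0 in the grace table stored there, then deletes the key. Done by hand it is roughly (tool from nfs-ganesha's rados-grace support; pool and namespace taken from the caps strings above, exact arguments assumed):

    import subprocess

    # Register the new ganesha node in the grace db kept in the .nfs pool,
    # namespace "cephfs" (arguments assumed; see note above).
    subprocess.run([
        "ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs",
        "add", "nfs.cephfs.0",
    ], check=True)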
Dec  7 04:43:18 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 04:43:18 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  7 04:43:18 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.jddrlu-rgw
Dec  7 04:43:18 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.jddrlu-rgw
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
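
Note: the per-daemon "-rgw" key is created with osd 'allow rwx tag rgw *=*', i.e. rwx on any pool tagged with the rgw application, so the ganesha RGW export FSAL can reach all RGW data pools without naming them individually. The resulting entities can be inspected directly (names copied from the log above):

    ceph auth get client.nfs.cephfs.0.0.compute-1.jddrlu
    ceph auth get client.nfs.cephfs.0.0.compute-1.jddrlu-rgw
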
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.jddrlu's ganesha conf is defaulting to empty
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.jddrlu's ganesha conf is defaulting to empty
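
Note: this WRN is benign: with no explicit bind address in the NFS service spec, cephadm renders ganesha's bind address empty so the daemon listens on all interfaces, and the ingress (haproxy/keepalived) service deployed later fronts it. The generated ganesha configuration lives in the RADOS objects named in these lines; a sketch for viewing it, assuming admin credentials and the default .nfs pool:

    # cluster-wide ganesha config object referenced by each daemon
    rados -p .nfs -N cephfs get conf-nfs.cephfs -
    # list everything stored for this NFS cluster in that namespace
    rados -p .nfs -N cephfs ls
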
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.jddrlu on compute-1
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.jddrlu on compute-1
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v31: 74 pgs: 1 unknown, 1 creating+peering, 72 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 1.7 KiB/s wr, 9 op/s
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:43:19 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 12 completed events
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:19 np0005549474 radosgw[96353]: LDAP not started since no server URIs were provided in the configuration.
Dec  7 04:43:19 np0005549474 radosgw[96353]: v1 topic migration: starting v1 topic migration..
Dec  7 04:43:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-rgw-rgw-compute-0-kbsleq[96349]: 2025-12-07T09:43:19.614+0000 7f6f21d88980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec  7 04:43:19 np0005549474 radosgw[96353]: v1 topic migration: finished v1 topic migration
Dec  7 04:43:19 np0005549474 radosgw[96353]: framework: beast
Dec  7 04:43:19 np0005549474 radosgw[96353]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec  7 04:43:19 np0005549474 radosgw[96353]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec  7 04:43:19 np0005549474 radosgw[96353]: starting handler: beast
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e6 new map
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e6 print_map
    e6
    btime 2025-12-07T09:43:19.670484+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  5
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-12-07T09:42:50.843467+0000
    modified  2025-12-07T09:43:17.384027+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24211}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    qdb_cluster  leader: 24211 members: 24211
    [mds.cephfs.compute-2.rxtsyx{0:24211} state up:active seq 2 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.qgzqbk{-1:14604} state up:standby seq 1 addr [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] compat {c=[1],r=[1],i=[1fff]}]
    [mds.cephfs.compute-1.ihigcc{-1:24293} state up:standby seq 1 addr [v2:192.168.122.101:6804/1729259208,v1:192.168.122.101:6805/1729259208] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1729259208,v1:192.168.122.101:6805/1729259208] up:boot
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.rxtsyx=up:active} 2 up:standby
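
Note: the MDS map churn here (e6 now, e7/e8 below) is each standby daemon registering and then being tied to the filesystem via join_fscid; one active rank plus two standbys is the steady state for this three-node cluster. The same fsmap can be viewed outside the mon log with:

    ceph fs status cephfs
    ceph fs dump    # full FSMap, matching the print_map payload above
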
Dec  7 04:43:19 np0005549474 radosgw[96353]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 04:43:19 np0005549474 radosgw[96353]: mgrc service_daemon_register rgw.14589 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.kbsleq,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=15adcc20-c494-4e96-8c9d-9a9668d901cf,zone_name=default,zonegroup_id=33ce195e-0f10-43f1-a319-f93c51bac89f,zonegroup_name=default}
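
Note: service_daemon_register is how radosgw publishes its metadata (version, zone, frontend endpoint) to the mgr; this record is what `ceph -s` counts under rgw. To read it back from any admin node:

    ceph service dump -f json-pretty   # full metadata blobs, incl. frontend_config#0
    ceph service status                # condensed per-daemon view
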
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.ihigcc"} v 0)
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ihigcc"}]: dispatch
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e6 all = 0
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: Creating key for client.nfs.cephfs.0.0.compute-1.jddrlu
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  7 04:43:19 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: Rados config object exists: conf-nfs.cephfs
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: Creating key for client.nfs.cephfs.0.0.compute-1.jddrlu-rgw
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: Bind address in nfs.cephfs.0.0.compute-1.jddrlu's ganesha conf is defaulting to empty
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: Deploying daemon nfs.cephfs.0.0.compute-1.jddrlu on compute-1
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: Cluster is now healthy
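
Note: POOL_APP_NOT_ENABLED clears here because the freshly created pool picked up an application tag (presumably the .nfs pool created for this NFS cluster). Had the warning lingered, tagging the pool manually resolves it; the application name below is an assumption and must match the consumer:

    ceph health detail                          # names the untagged pool(s)
    ceph osd pool application enable .nfs nfs   # hypothetical tag for the .nfs pool
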
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:20 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.llxakn
Dec  7 04:43:20 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.llxakn
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 04:43:20 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  7 04:43:20 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e7 new map
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e7 print_map
    e7
    btime 2025-12-07T09:43:21.377693+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  7
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-12-07T09:42:50.843467+0000
    modified  2025-12-07T09:43:20.452323+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24211}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    qdb_cluster  leader: 24211 members: 24211
    [mds.cephfs.compute-2.rxtsyx{0:24211} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.qgzqbk{-1:14604} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] compat {c=[1],r=[1],i=[1fff]}]
    [mds.cephfs.compute-1.ihigcc{-1:24293} state up:standby seq 1 addr [v2:192.168.122.101:6804/1729259208,v1:192.168.122.101:6805/1729259208] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 04:43:21 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Updating MDS map to version 7 from mon.0
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] up:active
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] up:standby
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.rxtsyx=up:active} 2 up:standby
Dec  7 04:43:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v32: 74 pgs: 74 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 9.1 KiB/s wr, 460 op/s
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: Creating key for client.nfs.cephfs.1.0.compute-2.llxakn
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 04:43:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 04:43:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e8 new map
Dec  7 04:43:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e8 print_map
    e8
    btime 2025-12-07T09:43:23.399497+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  7
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-12-07T09:42:50.843467+0000
    modified  2025-12-07T09:43:20.452323+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=24211}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    qdb_cluster  leader: 24211 members: 24211
    [mds.cephfs.compute-2.rxtsyx{0:24211} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.qgzqbk{-1:14604} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] compat {c=[1],r=[1],i=[1fff]}]
    [mds.cephfs.compute-1.ihigcc{-1:24293} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1729259208,v1:192.168.122.101:6805/1729259208] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 04:43:23 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1729259208,v1:192.168.122.101:6805/1729259208] up:standby
Dec  7 04:43:23 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.rxtsyx=up:active} 2 up:standby
Dec  7 04:43:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v33: 74 pgs: 74 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 7.7 KiB/s wr, 389 op/s
Dec  7 04:43:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  7 04:43:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 04:43:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  7 04:43:24 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  7 04:43:24 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  7 04:43:24 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.llxakn-rgw
Dec  7 04:43:24 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.llxakn-rgw
Dec  7 04:43:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 04:43:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 04:43:24 np0005549474 ceph-mgr[74811]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.llxakn's ganesha conf is defaulting to empty
Dec  7 04:43:24 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.llxakn's ganesha conf is defaulting to empty
Dec  7 04:43:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:24 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.llxakn on compute-2
Dec  7 04:43:24 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.llxakn on compute-2
Dec  7 04:43:24 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 7bbb61bc-5566-4dcf-9640-f15f46c1414f (Global Recovery Event) in 10 seconds
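
Note: the mgr progress module turns these deployment steps into the progress bars shown by `ceph -s`; completed events are persisted through the mgr/progress/completed config-key writes seen earlier. To inspect them directly:

    ceph progress        # active and recently completed events
    ceph progress json   # raw event records
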
Dec  7 04:43:24 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 04:43:24 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  7 04:43:24 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:24 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 04:43:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v34: 74 pgs: 74 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 6.2 KiB/s wr, 315 op/s
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: Rados config object exists: conf-nfs.cephfs
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: Creating key for client.nfs.cephfs.1.0.compute-2.llxakn-rgw
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: Bind address in nfs.cephfs.1.0.compute-2.llxakn's ganesha conf is defaulting to empty
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: Deploying daemon nfs.cephfs.1.0.compute-2.llxakn on compute-2
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:25 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.bjrqrk
Dec  7 04:43:25 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.bjrqrk
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 04:43:25 np0005549474 ceph-mgr[74811]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  7 04:43:25 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 04:43:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 04:43:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:26 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:26 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:26 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:26 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 04:43:26 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 04:43:26 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 04:43:26 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 04:43:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v35: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 160 KiB/s rd, 6.3 KiB/s wr, 299 op/s
Dec  7 04:43:28 np0005549474 ceph-mon[74516]: Creating key for client.nfs.cephfs.2.0.compute-0.bjrqrk
Dec  7 04:43:28 np0005549474 ceph-mon[74516]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  7 04:43:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  7 04:43:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 04:43:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v36: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 144 KiB/s rd, 5.7 KiB/s wr, 269 op/s
Dec  7 04:43:29 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 13 completed events
Dec  7 04:43:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:43:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  7 04:43:30 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  7 04:43:30 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  7 04:43:30 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.bjrqrk-rgw
Dec  7 04:43:30 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.bjrqrk-rgw
Dec  7 04:43:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 04:43:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:30 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 04:43:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 04:43:30 np0005549474 ceph-mgr[74811]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.bjrqrk's ganesha conf is defaulting to empty
Dec  7 04:43:30 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.bjrqrk's ganesha conf is defaulting to empty
Dec  7 04:43:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:43:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:43:30 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.bjrqrk on compute-0
Dec  7 04:43:30 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.bjrqrk on compute-0
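
Note: with the third ganesha daemon scheduled on compute-0 (this node), the podman/systemd lines that follow are cephadm materializing it locally. Once the serve loop settles, placement can be confirmed from any admin node:

    ceph orch ps --daemon-type nfs   # one nfs.cephfs.* daemon per host
    ceph orch ls nfs                 # service-level view (3/3 running)
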
Dec  7 04:43:31 np0005549474 podman[97552]: 2025-12-07 09:43:31.019842426 +0000 UTC m=+0.075998022 container create c7119ee9df35247463bde3ee25bd72883b0d39cfddbf73f32b2a272861a30b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:43:31 np0005549474 podman[97552]: 2025-12-07 09:43:30.969378187 +0000 UTC m=+0.025533853 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:31 np0005549474 systemd[1]: Started libpod-conmon-c7119ee9df35247463bde3ee25bd72883b0d39cfddbf73f32b2a272861a30b6f.scope.
Dec  7 04:43:31 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:31 np0005549474 podman[97552]: 2025-12-07 09:43:31.114771573 +0000 UTC m=+0.170927179 container init c7119ee9df35247463bde3ee25bd72883b0d39cfddbf73f32b2a272861a30b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:43:31 np0005549474 podman[97552]: 2025-12-07 09:43:31.123078385 +0000 UTC m=+0.179233971 container start c7119ee9df35247463bde3ee25bd72883b0d39cfddbf73f32b2a272861a30b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_saha, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:43:31 np0005549474 podman[97552]: 2025-12-07 09:43:31.128287304 +0000 UTC m=+0.184442920 container attach c7119ee9df35247463bde3ee25bd72883b0d39cfddbf73f32b2a272861a30b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 04:43:31 np0005549474 goofy_saha[97568]: 167 167
Dec  7 04:43:31 np0005549474 systemd[1]: libpod-c7119ee9df35247463bde3ee25bd72883b0d39cfddbf73f32b2a272861a30b6f.scope: Deactivated successfully.
Dec  7 04:43:31 np0005549474 podman[97552]: 2025-12-07 09:43:31.129814325 +0000 UTC m=+0.185969931 container died c7119ee9df35247463bde3ee25bd72883b0d39cfddbf73f32b2a272861a30b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_saha, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  7 04:43:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay-84dd7be552d01f1d0936208955f29db71dfff6ae47d48024466b6359b263754e-merged.mount: Deactivated successfully.
Dec  7 04:43:31 np0005549474 podman[97552]: 2025-12-07 09:43:31.169959088 +0000 UTC m=+0.226114684 container remove c7119ee9df35247463bde3ee25bd72883b0d39cfddbf73f32b2a272861a30b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_saha, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 04:43:31 np0005549474 systemd[1]: libpod-conmon-c7119ee9df35247463bde3ee25bd72883b0d39cfddbf73f32b2a272861a30b6f.scope: Deactivated successfully.
Dec  7 04:43:31 np0005549474 systemd[1]: Reloading.
Dec  7 04:43:31 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:43:31 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:43:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v37: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 5.6 KiB/s wr, 229 op/s
Dec  7 04:43:31 np0005549474 systemd[1]: Reloading.
Dec  7 04:43:31 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  7 04:43:31 np0005549474 ceph-mon[74516]: Rados config object exists: conf-nfs.cephfs
Dec  7 04:43:31 np0005549474 ceph-mon[74516]: Creating key for client.nfs.cephfs.2.0.compute-0.bjrqrk-rgw
Dec  7 04:43:31 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 04:43:31 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:31 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 04:43:31 np0005549474 ceph-mon[74516]: Bind address in nfs.cephfs.2.0.compute-0.bjrqrk's ganesha conf is defaulting to empty
Dec  7 04:43:31 np0005549474 ceph-mon[74516]: Deploying daemon nfs.cephfs.2.0.compute-0.bjrqrk on compute-0
Dec  7 04:43:31 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:43:31 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:43:31 np0005549474 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:43:32 np0005549474 podman[97712]: 2025-12-07 09:43:32.026326464 +0000 UTC m=+0.089352929 container create 3032e77de59ae8dbc659eb5f3388808ef7e05cac9360a3023473657f3f714972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:43:32 np0005549474 podman[97712]: 2025-12-07 09:43:31.962324474 +0000 UTC m=+0.025350949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:43:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649542f307cced57b8fe0950304d5c93960602a94da0e7f2895f956ca8bfb913/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649542f307cced57b8fe0950304d5c93960602a94da0e7f2895f956ca8bfb913/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649542f307cced57b8fe0950304d5c93960602a94da0e7f2895f956ca8bfb913/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649542f307cced57b8fe0950304d5c93960602a94da0e7f2895f956ca8bfb913/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bjrqrk-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:32 np0005549474 podman[97712]: 2025-12-07 09:43:32.090046287 +0000 UTC m=+0.153072822 container init 3032e77de59ae8dbc659eb5f3388808ef7e05cac9360a3023473657f3f714972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 04:43:32 np0005549474 podman[97712]: 2025-12-07 09:43:32.103268641 +0000 UTC m=+0.166295106 container start 3032e77de59ae8dbc659eb5f3388808ef7e05cac9360a3023473657f3f714972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 04:43:32 np0005549474 bash[97712]: 3032e77de59ae8dbc659eb5f3388808ef7e05cac9360a3023473657f3f714972
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 04:43:32 np0005549474 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
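
Note: cephadm wraps each daemon in a templated systemd unit named ceph-<fsid>@<daemon-name>, which is what the "Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-..." line above reflects. On this host the unit and the container inventory can be inspected with (names copied from the log):

    systemctl status 'ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service'
    cephadm ls    # all cephadm-managed daemons on this host
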
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:32 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev ac129f2a-0f92-4556-9830-88d507dfd802 (Updating nfs.cephfs deployment (+3 -> 3))
Dec  7 04:43:32 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event ac129f2a-0f92-4556-9830-88d507dfd802 (Updating nfs.cephfs deployment (+3 -> 3)) in 13 seconds
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:32 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 10504ca7-10c5-4792-8fad-3f02e4ed2b43 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:32 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.kwciua on compute-1
Dec  7 04:43:32 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.kwciua on compute-1
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:32 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  7 04:43:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:43:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v38: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.4 KiB/s wr, 18 op/s
Dec  7 04:43:33 np0005549474 ceph-mon[74516]: Deploying daemon haproxy.nfs.cephfs.compute-1.kwciua on compute-1
Dec  7 04:43:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v39: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.4 KiB/s wr, 18 op/s
Dec  7 04:43:35 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 14 completed events
Dec  7 04:43:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:43:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:36 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:43:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:43:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 04:43:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:36 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.ieiboq on compute-0
Dec  7 04:43:36 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.ieiboq on compute-0
Dec  7 04:43:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v40: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.4 KiB/s wr, 23 op/s
Dec  7 04:43:37 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:37 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:37 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:38 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffac8000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:38 np0005549474 ceph-mon[74516]: Deploying daemon haproxy.nfs.cephfs.compute-0.ieiboq on compute-0
Dec  7 04:43:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v41: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec  7 04:43:39 np0005549474 podman[97875]: 2025-12-07 09:43:39.74056778 +0000 UTC m=+2.327914645 container create 5e5c55de078b284dd64b9431e406073981391f549f129f2388b1765ffc7be606 (image=quay.io/ceph/haproxy:2.3, name=tender_wing)
Dec  7 04:43:39 np0005549474 podman[97875]: 2025-12-07 09:43:39.724969344 +0000 UTC m=+2.312316209 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  7 04:43:39 np0005549474 systemd[1]: Started libpod-conmon-5e5c55de078b284dd64b9431e406073981391f549f129f2388b1765ffc7be606.scope.
Dec  7 04:43:39 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:39 np0005549474 podman[97875]: 2025-12-07 09:43:39.830579885 +0000 UTC m=+2.417926820 container init 5e5c55de078b284dd64b9431e406073981391f549f129f2388b1765ffc7be606 (image=quay.io/ceph/haproxy:2.3, name=tender_wing)
Dec  7 04:43:39 np0005549474 podman[97875]: 2025-12-07 09:43:39.841277911 +0000 UTC m=+2.428624776 container start 5e5c55de078b284dd64b9431e406073981391f549f129f2388b1765ffc7be606 (image=quay.io/ceph/haproxy:2.3, name=tender_wing)
Dec  7 04:43:39 np0005549474 podman[97875]: 2025-12-07 09:43:39.844780965 +0000 UTC m=+2.432127920 container attach 5e5c55de078b284dd64b9431e406073981391f549f129f2388b1765ffc7be606 (image=quay.io/ceph/haproxy:2.3, name=tender_wing)
Dec  7 04:43:39 np0005549474 tender_wing[97991]: 0 0
Dec  7 04:43:39 np0005549474 systemd[1]: libpod-5e5c55de078b284dd64b9431e406073981391f549f129f2388b1765ffc7be606.scope: Deactivated successfully.
Dec  7 04:43:39 np0005549474 podman[97875]: 2025-12-07 09:43:39.846146402 +0000 UTC m=+2.433493267 container died 5e5c55de078b284dd64b9431e406073981391f549f129f2388b1765ffc7be606 (image=quay.io/ceph/haproxy:2.3, name=tender_wing)
Dec  7 04:43:39 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5c9358341bdcc77359819fb914255cb28120d73a79f7d2526841c3be7b7d7eb1-merged.mount: Deactivated successfully.
Dec  7 04:43:39 np0005549474 podman[97875]: 2025-12-07 09:43:39.889823109 +0000 UTC m=+2.477169974 container remove 5e5c55de078b284dd64b9431e406073981391f549f129f2388b1765ffc7be606 (image=quay.io/ceph/haproxy:2.3, name=tender_wing)
Dec  7 04:43:39 np0005549474 systemd[1]: libpod-conmon-5e5c55de078b284dd64b9431e406073981391f549f129f2388b1765ffc7be606.scope: Deactivated successfully.
Dec  7 04:43:39 np0005549474 systemd[1]: Reloading.
Dec  7 04:43:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:40 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffabc0014d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:40 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:43:40 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:43:40 np0005549474 systemd[1]: Reloading.
Dec  7 04:43:40 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:43:40 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:43:40 np0005549474 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.ieiboq for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:43:40 np0005549474 podman[98136]: 2025-12-07 09:43:40.926180466 +0000 UTC m=+0.070490502 container create e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:43:40 np0005549474 podman[98136]: 2025-12-07 09:43:40.892939983 +0000 UTC m=+0.037250069 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  7 04:43:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c3ed2a3c1244f6d428bf0b85400fa7978e9eed6b8c151a1aed91ebccaf3629d/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:41 np0005549474 podman[98136]: 2025-12-07 09:43:41.011654258 +0000 UTC m=+0.155964354 container init e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:43:41 np0005549474 podman[98136]: 2025-12-07 09:43:41.02185524 +0000 UTC m=+0.166165276 container start e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:43:41 np0005549474 bash[98136]: e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43
Dec  7 04:43:41 np0005549474 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.ieiboq for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:43:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [NOTICE] 340/094341 (2) : New worker #1 (4) forked
Dec  7 04:43:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:43:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:43:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 04:43:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:41 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.lkwxww on compute-2
Dec  7 04:43:41 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.lkwxww on compute-2
Dec  7 04:43:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v42: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec  7 04:43:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:42 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffabc0014d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:42 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:42 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:42 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:42 np0005549474 ceph-mon[74516]: Deploying daemon haproxy.nfs.cephfs.compute-2.lkwxww on compute-2
Dec  7 04:43:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:42 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v43: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1023 B/s wr, 4 op/s
Dec  7 04:43:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:44 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:44 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v44: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1023 B/s wr, 4 op/s
Dec  7 04:43:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:43:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:46 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffabc0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:43:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 04:43:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Dec  7 04:43:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:46 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:43:46 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:43:46 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:43:46 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:43:46 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 04:43:46 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 04:43:46 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:46 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:46 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:46 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.yjewfr on compute-2
Dec  7 04:43:46 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.yjewfr on compute-2
Dec  7 04:43:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:46 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:47 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:47 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:47 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:43:47 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:43:47 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 04:43:47 np0005549474 ceph-mon[74516]: Deploying daemon keepalived.nfs.cephfs.compute-2.yjewfr on compute-2
Dec  7 04:43:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v45: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1023 B/s wr, 4 op/s
Dec  7 04:43:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:48 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:48 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffabc0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:49 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v46: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:43:49
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.nfs', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms', 'backups', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.data']
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 04:43:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec  7 04:43:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:43:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec  7 04:43:49 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec  7 04:43:49 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec  7 04:43:49 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 91c7f431-52b4-45d7-9326-41b990acb594 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  7 04:43:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec  7 04:43:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:50 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:50 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec  7 04:43:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:51 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffabc0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec  7 04:43:51 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev f4bb1378-c1dd-49cd-a3ba-56f5399d6bd6 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:51 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 04:43:51 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 04:43:51 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:43:51 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:43:51 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:43:51 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:43:51 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.gawwbe on compute-1
Dec  7 04:43:51 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.gawwbe on compute-1
Dec  7 04:43:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v49: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 04:43:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec  7 04:43:52 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev f83fbaae-78f5-4d03-8e9f-6b38f6aad341 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: Deploying daemon keepalived.nfs.cephfs.compute-1.gawwbe on compute-1
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:52 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 51 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=51 pruub=12.335571289s) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active pruub 201.375808716s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:43:52 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 51 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=51 pruub=12.335571289s) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown pruub 201.375808716s@ mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:52 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:52 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:53 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec  7 04:43:53 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 5b0b7203-ecc0-481d-8fc2-b35587d759be (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.17( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.16( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1d( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.b( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.19( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.3( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.6( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.c( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.15( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1f( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1e( empty local-lis/les=19/20 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.17( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.16( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1d( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.b( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.19( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.3( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.6( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.0( empty local-lis/les=51/52 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.c( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1f( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.15( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1e( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=19/19 les/c/f=20/20/0 sis=51) [0] r=0 lpr=51 pi=[19,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Dec  7 04:43:53 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Dec  7 04:43:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v52: 136 pgs: 62 unknown, 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Dec  7 04:43:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec  7 04:43:54 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 7cbea9fa-d034-41a5-a3f2-fb3db0f48cbb (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:54 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 53 pg[6.0( v 46'39 (0'0,46'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=53 pruub=11.028326035s) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 45'38 mlcod 45'38 active pruub 202.055389404s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:43:54 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 53 pg[6.0( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=53 pruub=11.028326035s) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 45'38 mlcod 0'0 unknown pruub 202.055389404s@ mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:54 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
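
[Editor's sketch, not part of the recorded run.] The dispatch/finished pairs above are the mgr stepping each pool's pg_num (and the internal pg_num_actual) upward as the PG autoscaler resizes pools. A minimal illustration of issuing the same "osd pool set" mon command through the standard librados Python binding; the pool name, var, and val are copied from the audit lines, the conf/keyring paths are assumptions:

    # Sketch only: replays the mon command seen in the audit log above.
    import json
    import rados

    # conffile/keyring paths are assumptions, not taken from this log
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          conf={'keyring': '/etc/ceph/ceph.client.admin.keyring'})
    cluster.connect()
    try:
        cmd = json.dumps({
            "prefix": "osd pool set",
            "pool": "cephfs.cephfs.data",   # pool name taken from the log
            "var": "pg_num",
            "val": "32",
        })
        # mon_command() returns (retcode, output bytes, status string);
        # the monitor logs the matching "dispatch" and "finished" audit lines
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        print(ret, outs)
    finally:
        cluster.shutdown()
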
Dec  7 04:43:54 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec  7 04:43:54 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec  7 04:43:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:54 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffabc0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:54 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:55 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec  7 04:43:55 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 29d72cb5-e6ea-437b-953d-678870ae8036 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.a( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.e( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.5( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.3( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.2( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.7( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.1( v 46'39 (0'0,46'39] local-lis/les=21/22 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.4( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.d( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.f( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.6( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.b( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.9( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.8( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.c( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=21/22 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.e( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.3( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.2( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.a( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.0( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 45'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.1( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.7( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.5( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.d( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.f( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.9( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.6( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.8( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.4( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.b( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 54 pg[6.c( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=21/21 les/c/f=22/22/0 sis=53) [0] r=0 lpr=53 pi=[21,53)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec  7 04:43:55 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec  7 04:43:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v55: 182 pgs: 108 unknown, 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 04:43:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:55 np0005549474 ceph-mgr[74811]: [progress WARNING root] Starting Global Recovery Event,108 pgs not in active + clean state
Dec  7 04:43:56 np0005549474 python3[98193]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
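
[Editor's sketch, not part of the recorded run.] The ansible task above shells out to podman to run radosgw-admin inside the quay.io/ceph/ceph:v19 image. Rebuilt here as a Python subprocess call purely for readability; every flag and value is copied verbatim from the logged command, nothing is added:

    # Readability sketch of the podman invocation recorded above.
    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "radosgw-admin",
        "quay.io/ceph/ceph:v19",
        # everything after the image name is passed to radosgw-admin
        "--fsid", "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "user", "info", "--uid", "openstack",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.returncode)
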
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:43:56 np0005549474 podman[98194]: 2025-12-07 09:43:56.076510412 +0000 UTC m=+0.054265467 container create 8a66ffe065602e4ce0aaffc0c0523a5daf866f515fa42965f8d767a2717d0cb2 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:43:56 np0005549474 systemd[1]: Started libpod-conmon-8a66ffe065602e4ce0aaffc0c0523a5daf866f515fa42965f8d767a2717d0cb2.scope.
Dec  7 04:43:56 np0005549474 podman[98194]: 2025-12-07 09:43:56.050709763 +0000 UTC m=+0.028464878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:43:56 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75e45972c459bbafcfd3a9595333fb602fd8888d0967b50f1a5776189c828755/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75e45972c459bbafcfd3a9595333fb602fd8888d0967b50f1a5776189c828755/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:56 np0005549474 podman[98194]: 2025-12-07 09:43:56.16983982 +0000 UTC m=+0.147594925 container init 8a66ffe065602e4ce0aaffc0c0523a5daf866f515fa42965f8d767a2717d0cb2 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec  7 04:43:56 np0005549474 podman[98194]: 2025-12-07 09:43:56.182128852 +0000 UTC m=+0.159883877 container start 8a66ffe065602e4ce0aaffc0c0523a5daf866f515fa42965f8d767a2717d0cb2 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 04:43:56 np0005549474 podman[98194]: 2025-12-07 09:43:56.18556548 +0000 UTC m=+0.163320535 container attach 8a66ffe065602e4ce0aaffc0c0523a5daf866f515fa42965f8d767a2717d0cb2 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 55 pg[9.0( v 41'9 (0'0,41'9] local-lis/les=40/41 n=6 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=55 pruub=11.643532753s) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 41'8 mlcod 41'8 active pruub 204.767578125s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:43:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 55 pg[8.0( v 48'45 (0'0,48'45] local-lis/les=37/38 n=5 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=55 pruub=10.094965935s) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 48'44 mlcod 48'44 active pruub 203.219223022s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:43:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 55 pg[8.0( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=55 pruub=10.094965935s) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 48'44 mlcod 0'0 unknown pruub 203.219223022s@ mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 55 pg[9.0( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=55 pruub=11.643532753s) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 41'8 mlcod 0'0 unknown pruub 204.767578125s@ mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c4e4ef4900) operator()   moving buffer(0x55c4e5d32ca8 space 0x55c4e5cd6aa0 0x0~1000 clean)
Dec  7 04:43:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x55c4e60d2900) operator()   moving buffer(0x55c4e5ce00c8 space 0x55c4e5cd6900 0x0~1000 clean)
Dec  7 04:43:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c4e4ef4900) operator()   moving buffer(0x55c4e5ce1e28 space 0x55c4e5959c80 0x0~1000 clean)
Dec  7 04:43:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c4e4ef4900) operator()   moving buffer(0x55c4e5ce1928 space 0x55c4e5b98f80 0x0~1000 clean)
Dec  7 04:43:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c4e4ef4900) operator()   moving buffer(0x55c4e5d65b08 space 0x55c4e5cd6760 0x0~1000 clean)
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec  7 04:43:56 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev e9817854-b4b7-411e-a986-a7ed3dddf86a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 04:43:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:56 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:43:56 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:43:56 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 04:43:56 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 04:43:56 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:43:56 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:43:56 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.vqhjze on compute-0
Dec  7 04:43:56 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.vqhjze on compute-0
Dec  7 04:43:56 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec  7 04:43:56 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec  7 04:43:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:56 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:56 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffabc0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:57 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: Deploying daemon keepalived.nfs.cephfs.compute-0.vqhjze on compute-0
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec  7 04:43:57 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 7b960e40-b718-4c1e-a6e5-2decfabe73c8 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.11( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.11( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.10( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.10( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.17( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.16( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.17( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.16( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1a( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1b( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1b( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1a( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.19( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.18( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1e( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1f( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1f( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1e( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1c( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1d( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.2( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=1 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.3( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.7( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.6( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.6( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.7( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.5( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=1 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.4( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.8( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.9( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1( v 48'45 (0'0,48'45] local-lis/les=37/38 n=1 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.15( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.14( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.15( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.14( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.2( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.3( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=1 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.f( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.e( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.8( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.9( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.a( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.b( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.e( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.f( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.d( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.c( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.c( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.b( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.a( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.d( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1( v 41'9 (0'0,41'9] local-lis/les=40/41 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.4( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=1 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.5( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.19( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.18( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1d( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1c( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.13( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.12( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.12( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.13( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=40/41 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.10( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.17( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.11( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.10( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.11( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.16( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.16( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1a( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1b( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1a( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1e( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1f( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1f( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.18( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1e( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1c( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1d( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.17( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.19( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.3( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.2( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1b( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.7( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.6( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.6( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.7( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.4( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.0( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 41'8 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.8( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.9( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.15( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.14( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.15( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.14( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.2( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.3( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.5( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.f( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.e( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.8( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.a( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.b( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.e( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.f( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.9( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.d( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.c( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.c( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.0( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 48'44 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.d( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.b( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.a( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.4( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.19( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.18( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.1d( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.13( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.5( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.12( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[9.13( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=40/40 les/c/f=41/41/0 sis=55) [0] r=0 lpr=55 pi=[40,55)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.12( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 56 pg[8.1c( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  7 04:43:57 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:43:57 np0005549474 gifted_einstein[98209]: could not fetch user info: no user info saved
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:43:57.414679) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637414813, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6743, "num_deletes": 250, "total_data_size": 13250344, "memory_usage": 14081888, "flush_reason": "Manual Compaction"}
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec  7 04:43:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v58: 244 pgs: 62 unknown, 182 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637487269, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 11889316, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 142, "largest_seqno": 6880, "table_properties": {"data_size": 11865186, "index_size": 15376, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 72976, "raw_average_key_size": 23, "raw_value_size": 11806328, "raw_average_value_size": 3863, "num_data_blocks": 683, "num_entries": 3056, "num_filter_entries": 3056, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100349, "oldest_key_time": 1765100349, "file_creation_time": 1765100637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 72622 microseconds, and 28670 cpu microseconds.
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:43:57.487326) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 11889316 bytes OK
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:43:57.487348) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:43:57.489243) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:43:57.489259) EVENT_LOG_v1 {"time_micros": 1765100637489254, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:43:57.489277) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13220185, prev total WAL file size 13220185, number of live WAL files 2.
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:43:57.491887) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(57KB) 8(1944B)]
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637491986, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 11949754, "oldest_snapshot_seqno": -1}
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2879 keys, 11931953 bytes, temperature: kUnknown
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637590038, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 11931953, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11908094, "index_size": 15558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7237, "raw_key_size": 71972, "raw_average_key_size": 24, "raw_value_size": 11850673, "raw_average_value_size": 4116, "num_data_blocks": 690, "num_entries": 2879, "num_filter_entries": 2879, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765100637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:43:57.590281) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 11931953 bytes
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:43:57.595580) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.8 rd, 121.6 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.4, 0.0 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3165, records dropped: 286 output_compression: NoCompression
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:43:57.595617) EVENT_LOG_v1 {"time_micros": 1765100637595592, "job": 4, "event": "compaction_finished", "compaction_time_micros": 98124, "compaction_time_cpu_micros": 22731, "output_level": 6, "num_output_files": 1, "total_output_size": 11931953, "num_input_records": 3165, "num_output_records": 2879, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637597293, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637597345, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637597376, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec  7 04:43:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:43:57.491774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:43:57 np0005549474 systemd[1]: libpod-8a66ffe065602e4ce0aaffc0c0523a5daf866f515fa42965f8d767a2717d0cb2.scope: Deactivated successfully.
Dec  7 04:43:57 np0005549474 conmon[98209]: conmon 8a66ffe065602e4ce0aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a66ffe065602e4ce0aaffc0c0523a5daf866f515fa42965f8d767a2717d0cb2.scope/container/memory.events
Dec  7 04:43:57 np0005549474 podman[98420]: 2025-12-07 09:43:57.651483935 +0000 UTC m=+0.024085663 container died 8a66ffe065602e4ce0aaffc0c0523a5daf866f515fa42965f8d767a2717d0cb2 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 04:43:57 np0005549474 systemd[1]: var-lib-containers-storage-overlay-75e45972c459bbafcfd3a9595333fb602fd8888d0967b50f1a5776189c828755-merged.mount: Deactivated successfully.
Dec  7 04:43:57 np0005549474 podman[98420]: 2025-12-07 09:43:57.713894294 +0000 UTC m=+0.086495992 container remove 8a66ffe065602e4ce0aaffc0c0523a5daf866f515fa42965f8d767a2717d0cb2 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 04:43:57 np0005549474 systemd[1]: libpod-conmon-8a66ffe065602e4ce0aaffc0c0523a5daf866f515fa42965f8d767a2717d0cb2.scope: Deactivated successfully.
Dec  7 04:43:58 np0005549474 python3[98457]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:43:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec  7 04:43:58 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:58 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 04:43:58 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:58 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:58 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec  7 04:43:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec  7 04:43:58 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 8b669e56-a7d8-444e-a60e-0e6c573b084b (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 91c7f431-52b4-45d7-9326-41b990acb594 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 91c7f431-52b4-45d7-9326-41b990acb594 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 8 seconds
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev f4bb1378-c1dd-49cd-a3ba-56f5399d6bd6 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event f4bb1378-c1dd-49cd-a3ba-56f5399d6bd6 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 7 seconds
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev f83fbaae-78f5-4d03-8e9f-6b38f6aad341 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event f83fbaae-78f5-4d03-8e9f-6b38f6aad341 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 6 seconds
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 5b0b7203-ecc0-481d-8fc2-b35587d759be (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 5b0b7203-ecc0-481d-8fc2-b35587d759be (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 7cbea9fa-d034-41a5-a3f2-fb3db0f48cbb (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 7cbea9fa-d034-41a5-a3f2-fb3db0f48cbb (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 29d72cb5-e6ea-437b-953d-678870ae8036 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 29d72cb5-e6ea-437b-953d-678870ae8036 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Dec  7 04:43:58 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev e9817854-b4b7-411e-a986-a7ed3dddf86a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event e9817854-b4b7-411e-a986-a7ed3dddf86a (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 7b960e40-b718-4c1e-a6e5-2decfabe73c8 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 7b960e40-b718-4c1e-a6e5-2decfabe73c8 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 8b669e56-a7d8-444e-a60e-0e6c573b084b (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec  7 04:43:58 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 8b669e56-a7d8-444e-a60e-0e6c573b084b (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Dec  7 04:43:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:58 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:58 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 57 pg[11.0( v 56'72 (0'0,56'72] local-lis/les=44/45 n=8 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=57 pruub=13.569743156s) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 56'71 mlcod 56'71 active pruub 208.811141968s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:43:58 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 57 pg[11.0( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=57 pruub=13.569743156s) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 56'71 mlcod 0'0 unknown pruub 208.811141968s@ mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:58 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:43:59 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffabc0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:43:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec  7 04:43:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec  7 04:43:59 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.12( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.13( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.14( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.15( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.18( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1b( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1c( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1d( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1e( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1( v 56'72 (0'0,56'72] local-lis/les=44/45 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.19( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.4( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.5( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.6( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.b( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.17( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.2( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.16( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.c( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.a( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.9( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.d( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.e( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.f( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.8( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.3( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.7( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1a( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1f( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.10( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.11( v 56'72 lc 0'0 (0'0,56'72] local-lis/les=44/45 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.13( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.12( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.14( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.15( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.18( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1c( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1d( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1b( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.19( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.4( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.6( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.5( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.b( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.17( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.0( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 56'71 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.2( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.16( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.c( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1e( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.9( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.a( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.d( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.e( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.f( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.8( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.7( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.3( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1f( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.1a( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.11( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 58 pg[11.10( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:43:59 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec  7 04:43:59 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  7 04:43:59 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:59 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:43:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v61: 306 pgs: 124 unknown, 182 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  7 04:43:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 04:43:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:43:59 np0005549474 podman[98474]: 2025-12-07 09:43:59.725455988 +0000 UTC m=+1.663284816 container create 8fe0abd5a40c8aaed7d1eb1b22311ee941d356c883d3bc401b076f7b248bff93 (image=quay.io/ceph/ceph:v19, name=tender_beaver, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 04:43:59 np0005549474 podman[98379]: 2025-12-07 09:43:59.767328199 +0000 UTC m=+2.913078261 container create a9f9cd66e2db4f279a373971cbb1acd3a7aa5cd9d8fd6e57df833d4929a2d12b (image=quay.io/ceph/keepalived:2.2.4, name=agitated_robinson, distribution-scope=public, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, release=1793, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec  7 04:43:59 np0005549474 systemd[1]: Started libpod-conmon-8fe0abd5a40c8aaed7d1eb1b22311ee941d356c883d3bc401b076f7b248bff93.scope.
Dec  7 04:43:59 np0005549474 systemd[1]: Started libpod-conmon-a9f9cd66e2db4f279a373971cbb1acd3a7aa5cd9d8fd6e57df833d4929a2d12b.scope.
Dec  7 04:43:59 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:59 np0005549474 podman[98474]: 2025-12-07 09:43:59.704850717 +0000 UTC m=+1.642679555 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 04:43:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87e4760b9583e112eeebe80560a2af88ff497900c566abb1c4c4e68adbeff7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87e4760b9583e112eeebe80560a2af88ff497900c566abb1c4c4e68adbeff7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:43:59 np0005549474 podman[98379]: 2025-12-07 09:43:59.751958378 +0000 UTC m=+2.897708460 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  7 04:43:59 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:43:59 np0005549474 podman[98474]: 2025-12-07 09:43:59.805366 +0000 UTC m=+1.743194848 container init 8fe0abd5a40c8aaed7d1eb1b22311ee941d356c883d3bc401b076f7b248bff93 (image=quay.io/ceph/ceph:v19, name=tender_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 04:43:59 np0005549474 podman[98474]: 2025-12-07 09:43:59.81163939 +0000 UTC m=+1.749468218 container start 8fe0abd5a40c8aaed7d1eb1b22311ee941d356c883d3bc401b076f7b248bff93 (image=quay.io/ceph/ceph:v19, name=tender_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:43:59 np0005549474 podman[98379]: 2025-12-07 09:43:59.814741749 +0000 UTC m=+2.960491821 container init a9f9cd66e2db4f279a373971cbb1acd3a7aa5cd9d8fd6e57df833d4929a2d12b (image=quay.io/ceph/keepalived:2.2.4, name=agitated_robinson, vendor=Red Hat, Inc., release=1793, distribution-scope=public, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, version=2.2.4)
Dec  7 04:43:59 np0005549474 podman[98474]: 2025-12-07 09:43:59.817881329 +0000 UTC m=+1.755710187 container attach 8fe0abd5a40c8aaed7d1eb1b22311ee941d356c883d3bc401b076f7b248bff93 (image=quay.io/ceph/ceph:v19, name=tender_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 04:43:59 np0005549474 podman[98379]: 2025-12-07 09:43:59.819104424 +0000 UTC m=+2.964854486 container start a9f9cd66e2db4f279a373971cbb1acd3a7aa5cd9d8fd6e57df833d4929a2d12b (image=quay.io/ceph/keepalived:2.2.4, name=agitated_robinson, release=1793, description=keepalived for Ceph, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.openshift.tags=Ceph keepalived, name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container)
Dec  7 04:43:59 np0005549474 podman[98379]: 2025-12-07 09:43:59.821663587 +0000 UTC m=+2.967413649 container attach a9f9cd66e2db4f279a373971cbb1acd3a7aa5cd9d8fd6e57df833d4929a2d12b (image=quay.io/ceph/keepalived:2.2.4, name=agitated_robinson, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-type=git, io.openshift.expose-services=, name=keepalived, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9)
Dec  7 04:43:59 np0005549474 agitated_robinson[98540]: 0 0
Dec  7 04:43:59 np0005549474 systemd[1]: libpod-a9f9cd66e2db4f279a373971cbb1acd3a7aa5cd9d8fd6e57df833d4929a2d12b.scope: Deactivated successfully.
Dec  7 04:43:59 np0005549474 podman[98379]: 2025-12-07 09:43:59.823429358 +0000 UTC m=+2.969179420 container died a9f9cd66e2db4f279a373971cbb1acd3a7aa5cd9d8fd6e57df833d4929a2d12b (image=quay.io/ceph/keepalived:2.2.4, name=agitated_robinson, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.buildah.version=1.28.2, version=2.2.4, vcs-type=git, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, architecture=x86_64)
Dec  7 04:43:59 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d9fcf3ba91a77235dc535908dbb88dfee24ddfa01aaa736b6f341a4395ab2720-merged.mount: Deactivated successfully.
Dec  7 04:43:59 np0005549474 podman[98379]: 2025-12-07 09:43:59.858879995 +0000 UTC m=+3.004630057 container remove a9f9cd66e2db4f279a373971cbb1acd3a7aa5cd9d8fd6e57df833d4929a2d12b (image=quay.io/ceph/keepalived:2.2.4, name=agitated_robinson, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, architecture=x86_64, io.buildah.version=1.28.2, name=keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793)
Dec  7 04:43:59 np0005549474 systemd[1]: libpod-conmon-a9f9cd66e2db4f279a373971cbb1acd3a7aa5cd9d8fd6e57df833d4929a2d12b.scope: Deactivated successfully.
Dec  7 04:44:00 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:00 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:00 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec  7 04:44:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:44:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec  7 04:44:00 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec  7 04:44:00 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec  7 04:44:00 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec  7 04:44:00 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:00 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:00 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:00 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 23 completed events
Dec  7 04:44:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:44:00 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:00 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:00 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:00 np0005549474 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.vqhjze for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:44:00 np0005549474 podman[98770]: 2025-12-07 09:44:00.994380952 +0000 UTC m=+0.040855833 container create 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, release=1793, version=2.2.4, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-type=git, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public)
Dec  7 04:44:01 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d3b3822a7b42a611e50df5f7a81e11b8fcde6fadd660202ba54c3f4728182d/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:01 np0005549474 podman[98770]: 2025-12-07 09:44:01.054096974 +0000 UTC m=+0.100571845 container init 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., release=1793)
Dec  7 04:44:01 np0005549474 podman[98770]: 2025-12-07 09:44:01.058330256 +0000 UTC m=+0.104805137 container start 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., vcs-type=git, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, version=2.2.4, com.redhat.component=keepalived-container, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, release=1793, description=keepalived for Ceph, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  7 04:44:01 np0005549474 bash[98770]: 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c
Dec  7 04:44:01 np0005549474 podman[98770]: 2025-12-07 09:44:00.978122056 +0000 UTC m=+0.024596917 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  7 04:44:01 np0005549474 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.vqhjze for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:44:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:01 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec  7 04:44:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:01 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec  7 04:44:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:01 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec  7 04:44:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:01 2025: Configuration file /etc/keepalived/keepalived.conf
Dec  7 04:44:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:01 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec  7 04:44:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:01 2025: Starting VRRP child process, pid=4
Dec  7 04:44:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:01 2025: Startup complete
Dec  7 04:44:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:01 2025: (VI_0) Entering BACKUP STATE (init)
Dec  7 04:44:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:01 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:01 2025: VRRP_Script(check_backend) succeeded
Dec  7 04:44:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:44:01 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec  7 04:44:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec  7 04:44:01 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec  7 04:44:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v63: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 246 B/s wr, 5 op/s
Dec  7 04:44:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:02 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec  7 04:44:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:02 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffabc0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:02 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffabc0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:44:02 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec  7 04:44:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec  7 04:44:02 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 04:44:02 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:03 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec  7 04:44:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 04:44:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:03 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:03 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 10504ca7-10c5-4792-8fad-3f02e4ed2b43 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec  7 04:44:03 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 10504ca7-10c5-4792-8fad-3f02e4ed2b43 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 31 seconds
Dec  7 04:44:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 04:44:03 np0005549474 tender_beaver[98538]: {
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "user_id": "openstack",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "display_name": "openstack",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "email": "",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "suspended": 0,
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "max_buckets": 1000,
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "subusers": [],
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "keys": [
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        {
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:            "user": "openstack",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:            "access_key": "0HB9Y490933IT7CRT0KA",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:            "secret_key": "Y7lCzAUscGAksQjlzpl55pntSjBj4a8sO12HjF7U",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:            "active": true,
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:            "create_date": "2025-12-07T09:44:00.186136Z"
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        }
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    ],
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "swift_keys": [],
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "caps": [],
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "op_mask": "read, write, delete",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "default_placement": "",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "default_storage_class": "",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "placement_tags": [],
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "bucket_quota": {
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        "enabled": false,
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        "check_on_raw": false,
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        "max_size": -1,
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        "max_size_kb": 0,
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        "max_objects": -1
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    },
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "user_quota": {
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        "enabled": false,
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        "check_on_raw": false,
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        "max_size": -1,
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        "max_size_kb": 0,
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:        "max_objects": -1
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    },
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "temp_url_keys": [],
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "type": "rgw",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "mfa_ids": [],
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "account_id": "",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "path": "/",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "create_date": "2025-12-07T09:44:00.185236Z",
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "tags": [],
Dec  7 04:44:03 np0005549474 tender_beaver[98538]:    "group_ids": []
Dec  7 04:44:03 np0005549474 tender_beaver[98538]: }
Dec  7 04:44:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:03 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 89e42c06-646b-411a-9988-769820a07c08 (Updating alertmanager deployment (+1 -> 1))
Dec  7 04:44:03 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Dec  7 04:44:03 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Dec  7 04:44:03 np0005549474 systemd[1]: libpod-8fe0abd5a40c8aaed7d1eb1b22311ee941d356c883d3bc401b076f7b248bff93.scope: Deactivated successfully.
Dec  7 04:44:03 np0005549474 podman[98474]: 2025-12-07 09:44:03.160963411 +0000 UTC m=+5.098792259 container died 8fe0abd5a40c8aaed7d1eb1b22311ee941d356c883d3bc401b076f7b248bff93 (image=quay.io/ceph/ceph:v19, name=tender_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 04:44:03 np0005549474 systemd[1]: var-lib-containers-storage-overlay-c87e4760b9583e112eeebe80560a2af88ff497900c566abb1c4c4e68adbeff7f-merged.mount: Deactivated successfully.
Dec  7 04:44:03 np0005549474 podman[98474]: 2025-12-07 09:44:03.216153094 +0000 UTC m=+5.153981922 container remove 8fe0abd5a40c8aaed7d1eb1b22311ee941d356c883d3bc401b076f7b248bff93 (image=quay.io/ceph/ceph:v19, name=tender_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:44:03 np0005549474 systemd[1]: libpod-conmon-8fe0abd5a40c8aaed7d1eb1b22311ee941d356c883d3bc401b076f7b248bff93.scope: Deactivated successfully.
Dec  7 04:44:03 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec  7 04:44:03 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec  7 04:44:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v65: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 199 B/s wr, 4 op/s
Dec  7 04:44:03 np0005549474 python3[98933]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:44:03 np0005549474 ceph-mgr[74811]: [dashboard INFO request] [192.168.122.100:34224] [GET] [200] [0.113s] [6.3K] [e81d7761-0831-4a2b-b9a2-e6b380104701] /
Dec  7 04:44:04 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec  7 04:44:04 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec  7 04:44:04 np0005549474 python3[98959]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:44:04 np0005549474 ceph-mgr[74811]: [dashboard INFO request] [192.168.122.100:34240] [GET] [200] [0.002s] [6.3K] [2aa274c6-1652-41a2-aa43-3ba9a52a8f9a] /
Dec  7 04:44:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:04 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:04 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:04 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:04 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:04 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:04 np0005549474 ceph-mon[74516]: Deploying daemon alertmanager.compute-0 on compute-0
Dec  7 04:44:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:04 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:04 2025: (VI_0) Entering MASTER STATE
Dec  7 04:44:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:05 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffac4001110 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:05 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec  7 04:44:05 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec  7 04:44:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v66: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 167 B/s wr, 3 op/s
Dec  7 04:44:05 np0005549474 podman[98898]: 2025-12-07 09:44:05.546615844 +0000 UTC m=+1.923654193 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  7 04:44:05 np0005549474 podman[98898]: 2025-12-07 09:44:05.560528833 +0000 UTC m=+1.937567162 volume create e90d3ded5815ece9be326d23942b936cdf7b8c4bba07aa0df06cb1e8ef409da6
Dec  7 04:44:05 np0005549474 podman[98898]: 2025-12-07 09:44:05.567219945 +0000 UTC m=+1.944258274 container create a347d865e9f1f22bff05b6eb500f97307151b08bd98f14e7b20e3c482fd2e5fd (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 systemd[1]: Started libpod-conmon-a347d865e9f1f22bff05b6eb500f97307151b08bd98f14e7b20e3c482fd2e5fd.scope.
Dec  7 04:44:05 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eef1480cf4fc7ceba1ba538776a5873a00ba01f62ac15c890bb6c24c3c302ed/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:05 np0005549474 podman[98898]: 2025-12-07 09:44:05.623727745 +0000 UTC m=+2.000766084 container init a347d865e9f1f22bff05b6eb500f97307151b08bd98f14e7b20e3c482fd2e5fd (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 podman[98898]: 2025-12-07 09:44:05.633210718 +0000 UTC m=+2.010249047 container start a347d865e9f1f22bff05b6eb500f97307151b08bd98f14e7b20e3c482fd2e5fd (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 podman[98898]: 2025-12-07 09:44:05.636486552 +0000 UTC m=+2.013524891 container attach a347d865e9f1f22bff05b6eb500f97307151b08bd98f14e7b20e3c482fd2e5fd (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 blissful_mcclintock[99082]: 65534 65534
Dec  7 04:44:05 np0005549474 systemd[1]: libpod-a347d865e9f1f22bff05b6eb500f97307151b08bd98f14e7b20e3c482fd2e5fd.scope: Deactivated successfully.
Dec  7 04:44:05 np0005549474 podman[98898]: 2025-12-07 09:44:05.63958475 +0000 UTC m=+2.016623099 container died a347d865e9f1f22bff05b6eb500f97307151b08bd98f14e7b20e3c482fd2e5fd (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 systemd[1]: var-lib-containers-storage-overlay-9eef1480cf4fc7ceba1ba538776a5873a00ba01f62ac15c890bb6c24c3c302ed-merged.mount: Deactivated successfully.
Dec  7 04:44:05 np0005549474 podman[98898]: 2025-12-07 09:44:05.676999714 +0000 UTC m=+2.054038063 container remove a347d865e9f1f22bff05b6eb500f97307151b08bd98f14e7b20e3c482fd2e5fd (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 24 completed events
Dec  7 04:44:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:44:05 np0005549474 podman[98898]: 2025-12-07 09:44:05.681682558 +0000 UTC m=+2.058720897 volume remove e90d3ded5815ece9be326d23942b936cdf7b8c4bba07aa0df06cb1e8ef409da6
Dec  7 04:44:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:05 np0005549474 systemd[1]: libpod-conmon-a347d865e9f1f22bff05b6eb500f97307151b08bd98f14e7b20e3c482fd2e5fd.scope: Deactivated successfully.
Dec  7 04:44:05 np0005549474 podman[99099]: 2025-12-07 09:44:05.734574725 +0000 UTC m=+0.031527435 volume create e71a1e6dc7e22170845092ea9fc1de34a7c1eb1ff044e17b955bf5dbc13af03f
Dec  7 04:44:05 np0005549474 podman[99099]: 2025-12-07 09:44:05.740494064 +0000 UTC m=+0.037446774 container create e99931058e1d2db8f073ef92cfd0ecfd12ff5e0020dde835dbf6121c33454241 (image=quay.io/prometheus/alertmanager:v0.25.0, name=charming_wiles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 systemd[1]: Started libpod-conmon-e99931058e1d2db8f073ef92cfd0ecfd12ff5e0020dde835dbf6121c33454241.scope.
Dec  7 04:44:05 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:05 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa05a61e8b7abe2368e22db135c9207791cd597247c5805deddb43d47337ece/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:05 np0005549474 podman[99099]: 2025-12-07 09:44:05.722695284 +0000 UTC m=+0.019648014 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  7 04:44:05 np0005549474 podman[99099]: 2025-12-07 09:44:05.826258035 +0000 UTC m=+0.123210765 container init e99931058e1d2db8f073ef92cfd0ecfd12ff5e0020dde835dbf6121c33454241 (image=quay.io/prometheus/alertmanager:v0.25.0, name=charming_wiles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 podman[99099]: 2025-12-07 09:44:05.835590412 +0000 UTC m=+0.132543122 container start e99931058e1d2db8f073ef92cfd0ecfd12ff5e0020dde835dbf6121c33454241 (image=quay.io/prometheus/alertmanager:v0.25.0, name=charming_wiles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 charming_wiles[99116]: 65534 65534
Dec  7 04:44:05 np0005549474 podman[99099]: 2025-12-07 09:44:05.83794562 +0000 UTC m=+0.134898330 container attach e99931058e1d2db8f073ef92cfd0ecfd12ff5e0020dde835dbf6121c33454241 (image=quay.io/prometheus/alertmanager:v0.25.0, name=charming_wiles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 systemd[1]: libpod-e99931058e1d2db8f073ef92cfd0ecfd12ff5e0020dde835dbf6121c33454241.scope: Deactivated successfully.
Dec  7 04:44:05 np0005549474 podman[99099]: 2025-12-07 09:44:05.848699418 +0000 UTC m=+0.145652118 container died e99931058e1d2db8f073ef92cfd0ecfd12ff5e0020dde835dbf6121c33454241 (image=quay.io/prometheus/alertmanager:v0.25.0, name=charming_wiles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 systemd[1]: var-lib-containers-storage-overlay-daa05a61e8b7abe2368e22db135c9207791cd597247c5805deddb43d47337ece-merged.mount: Deactivated successfully.
Dec  7 04:44:05 np0005549474 podman[99099]: 2025-12-07 09:44:05.891718002 +0000 UTC m=+0.188670712 container remove e99931058e1d2db8f073ef92cfd0ecfd12ff5e0020dde835dbf6121c33454241 (image=quay.io/prometheus/alertmanager:v0.25.0, name=charming_wiles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:05 np0005549474 podman[99099]: 2025-12-07 09:44:05.894590144 +0000 UTC m=+0.191542854 volume remove e71a1e6dc7e22170845092ea9fc1de34a7c1eb1ff044e17b955bf5dbc13af03f
Dec  7 04:44:05 np0005549474 systemd[1]: libpod-conmon-e99931058e1d2db8f073ef92cfd0ecfd12ff5e0020dde835dbf6121c33454241.scope: Deactivated successfully.
Dec  7 04:44:05 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:06 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:06 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:06 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:06 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec  7 04:44:06 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec  7 04:44:06 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:06 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:06 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:06 np0005549474 systemd[1]: Starting Ceph alertmanager.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:44:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:06 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:06 np0005549474 podman[99255]: 2025-12-07 09:44:06.734280197 +0000 UTC m=+0.024563266 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  7 04:44:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:07 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec  7 04:44:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v67: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 255 B/s wr, 3 op/s
Dec  7 04:44:07 np0005549474 podman[99255]: 2025-12-07 09:44:07.82046727 +0000 UTC m=+1.110750339 volume create bbecabe2ba68b9677e7fcee6c79ccd56f46ed5ff3a150457392e6e72cec26188
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:07 np0005549474 podman[99255]: 2025-12-07 09:44:07.834639566 +0000 UTC m=+1.124922615 container create 04c8c2e8a280cff3b4a05dd7c603babcc202832ee5c7ed7519a19bb4d3debe52 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b1fdc19db2ff3d8cc6e33fb714d072d9de45e71e4f4e2f60ae8a4ffe5b3929/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b1fdc19db2ff3d8cc6e33fb714d072d9de45e71e4f4e2f60ae8a4ffe5b3929/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec  7 04:44:07 np0005549474 podman[99255]: 2025-12-07 09:44:07.896592073 +0000 UTC m=+1.186875132 container init 04c8c2e8a280cff3b4a05dd7c603babcc202832ee5c7ed7519a19bb4d3debe52 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:07 np0005549474 podman[99255]: 2025-12-07 09:44:07.902391809 +0000 UTC m=+1.192674858 container start 04c8c2e8a280cff3b4a05dd7c603babcc202832ee5c7ed7519a19bb4d3debe52 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:07 np0005549474 bash[99255]: 04c8c2e8a280cff3b4a05dd7c603babcc202832ee5c7ed7519a19bb4d3debe52
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec  7 04:44:07 np0005549474 systemd[1]: Started Ceph alertmanager.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec  7 04:44:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[99272]: ts=2025-12-07T09:44:07.934Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec  7 04:44:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[99272]: ts=2025-12-07T09:44:07.934Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[5.1e( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.13( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[5.14( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.b( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.f( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[5.6( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.4( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[12.8( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.3( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.2( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[5.3( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[12.a( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[12.e( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.6( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[12.b( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[5.19( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[12.10( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.1b( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[5.c( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.e( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[5.5( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[12.c( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.9( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[5.a( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.8( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[12.6( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[12.19( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.10( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[12.1c( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[5.17( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[12.12( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.18( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[5.1d( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[7.1e( empty local-lis/les=0/0 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.12( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.405574799s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.196807861s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.12( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.405552864s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.196807861s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1d( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.230230331s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021575928s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1d( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.230221748s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021575928s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.11( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.365837097s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.157257080s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.11( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.365826607s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.157257080s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.10( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.365734100s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.157241821s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.10( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.365725517s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.157241821s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1c( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.222806931s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.014404297s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1c( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.222798347s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.014404297s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.13( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.405074120s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.196792603s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.13( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.405036926s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.196792603s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.10( v 59'48 (0'0,59'48] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.365102768s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=56'46 lcod 59'47 mlcod 59'47 active pruub 218.157226562s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.10( v 59'48 (0'0,59'48] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.365057945s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=56'46 lcod 59'47 mlcod 0'0 unknown NOTIFY pruub 218.157226562s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1b( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.229681015s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.022094727s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.17( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.364745140s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.157226562s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1b( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.229626656s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.022094727s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.17( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.364724159s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.157226562s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.16( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.364652634s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.157333374s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.16( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.364621162s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.157333374s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.11( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.364484787s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.157226562s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.11( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.364464760s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.157226562s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.16( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.364485741s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.157348633s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.17( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.370869637s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.163772583s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.16( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.364458084s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.157348633s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.17( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.370852470s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.163772583s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.19( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.403992653s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197128296s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.1b( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.364288330s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.157440186s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.19( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.403975487s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197128296s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.1b( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.364270210s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.157440186s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1b( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.403708458s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197036743s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.14( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.228301048s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021652222s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1b( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.403690338s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197036743s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.14( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.228282928s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021652222s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.18( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.370162964s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.163665771s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.18( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.370147705s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.163665771s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1c( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.403464317s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197097778s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1c( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.403450966s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197097778s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.13( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.227864265s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021560669s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.13( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.227847099s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021560669s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1d( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.403210640s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197036743s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.1f( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.369795799s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.163635254s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1d( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.403195381s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197036743s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.1f( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.369775772s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.163635254s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1e( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.402992249s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197067261s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1e( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.402976990s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197067261s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.402971268s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197113037s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.402957916s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197113037s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.e( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.227515221s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021728516s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.e( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.227499962s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021728516s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.2( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.369519234s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.163833618s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.2( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.369503021s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.163833618s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.3( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.369445801s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.163818359s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.3( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.369422913s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.163818359s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.4( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.402694702s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197143555s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.4( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.402683258s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197143555s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.9( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.255656242s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.050186157s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.9( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.255641937s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.050186157s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.6( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.369489670s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.164062500s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.6( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.369469643s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164062500s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.5( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.402514458s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197189331s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.5( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.402500153s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197189331s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1a( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.226748466s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021560669s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.a( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.226918221s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021728516s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1a( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.226727486s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021560669s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.a( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.226899147s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021728516s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.6( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.369022369s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.163986206s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.7( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368997574s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.164001465s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.6( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368988991s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.163986206s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.7( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368984222s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164001465s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.9( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.226670265s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021835327s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.b( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.255125046s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.050308228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.9( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.226651192s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021835327s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.b( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.255111694s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.050308228s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.5( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368939400s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.164245605s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.5( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368926048s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164245605s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.14( v 60'78 (0'0,60'78] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.401467323s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=60'78 lcod 59'77 mlcod 59'77 active pruub 220.196853638s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.8( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368607521s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.164016724s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.8( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368595123s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164016724s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.9( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368649483s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.164108276s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.9( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368634224s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164108276s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.14( v 60'78 (0'0,60'78] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.401398659s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=60'78 lcod 59'77 mlcod 0'0 unknown NOTIFY pruub 220.196853638s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.d( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.226281166s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021820068s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.d( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.226269722s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021820068s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.f( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.253907204s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.049530029s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.f( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.253891945s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.049530029s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.17( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.401625633s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197311401s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.17( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.401613235s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197311401s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.18( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.226108551s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021835327s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.18( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.226089478s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021835327s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.15( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368327141s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.164138794s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.15( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368311882s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164138794s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.16( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.401547432s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197387695s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.16( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.401532173s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197387695s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.19( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.225814819s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021820068s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.15( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368190765s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.164215088s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.19( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.225791931s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021820068s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.15( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368173599s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164215088s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.d( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.253394127s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.049514771s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.d( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.253179550s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.049514771s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.14( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367814064s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.164169312s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.14( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367799759s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164169312s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.3( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367842674s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.164230347s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.3( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367825508s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164230347s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.1( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.252544403s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.049041748s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.3( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.225342751s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021865845s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.1( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.252529144s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.049041748s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.3( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.225325584s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021865845s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.f( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367684364s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.164276123s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.f( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367666245s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164276123s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.e( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368156433s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.164794922s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.e( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368143082s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164794922s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.a( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.400831223s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197509766s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.a( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.400818825s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197509766s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.5( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.225298882s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.022048950s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.5( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.225282669s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.022048950s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.7( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.252235413s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.049026489s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.9( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368063927s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.164871216s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.7( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.252220154s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.049026489s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.9( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.368048668s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164871216s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[99272]: ts=2025-12-07T09:44:07.953Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.8( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367891312s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.164794922s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.8( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367873192s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164794922s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.a( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367730141s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.164810181s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.a( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367713928s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164810181s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.b( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367684364s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.164825439s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.2( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.224740982s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021896362s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.b( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367668152s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164825439s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.2( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.224723816s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021896362s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.f( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367529869s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.164855957s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.3( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.251099586s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.048431396s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.e( v 60'78 (0'0,60'78] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.400264740s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=60'78 lcod 59'77 mlcod 59'77 active pruub 220.197601318s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.f( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367494583s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164855957s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.3( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.251081467s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.048431396s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.e( v 60'78 (0'0,60'78] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.400234222s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=60'78 lcod 59'77 mlcod 0'0 unknown NOTIFY pruub 220.197601318s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.224576950s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.022003174s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.224548340s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.022003174s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[99272]: ts=2025-12-07T09:44:07.955Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.f( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.400039673s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197586060s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.f( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.400023460s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197586060s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.d( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367240906s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.164886475s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.c( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367235184s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.164901733s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.d( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367218018s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164886475s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.d( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367249489s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.164932251s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.8( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.399894714s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197601318s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.c( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367215157s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164901733s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.d( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367236137s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164932251s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.8( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.399878502s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197601318s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.b( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367108345s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.164947510s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.b( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367095947s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.164947510s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.3( v 60'78 (0'0,60'78] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.399762154s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=60'78 lcod 59'77 mlcod 59'77 active pruub 220.197647095s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.a( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367233276s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.165130615s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.6( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.223991394s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.021896362s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.a( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.367213249s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.165130615s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.3( v 60'78 (0'0,60'78] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.399719238s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=60'78 lcod 59'77 mlcod 0'0 unknown NOTIFY pruub 220.197647095s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.6( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.223941803s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.021896362s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.c( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.224009514s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.022018433s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.c( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.223998070s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.022018433s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.5( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.251306534s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.049499512s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.7( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.399448395s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197647095s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.8( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.223793983s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.022018433s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.7( v 56'72 (0'0,56'72] local-lis/les=57/58 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.399431229s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197647095s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[6.5( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.251287460s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.049499512s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.8( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.223775864s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.022018433s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.4( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366764069s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.165054321s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.4( v 48'45 (0'0,48'45] local-lis/les=55/56 n=1 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366746902s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.165054321s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.5( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366781235s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.165130615s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.5( v 41'9 (0'0,41'9] local-lis/les=55/56 n=1 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366766930s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.165130615s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1a( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.399265289s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active pruub 220.197662354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[11.1a( v 56'72 (0'0,56'72] local-lis/les=57/58 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.399248123s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.197662354s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.18( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366610527s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.165069580s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.18( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366599083s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.165069580s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.19( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366520882s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.165054321s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.1d( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366513252s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.165054321s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.19( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366497993s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.165054321s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1f( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.223421097s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.022048950s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.1d( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366498947s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.165054321s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.1c( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366432190s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.165054321s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.1f( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.223404884s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.022048950s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.1c( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366413116s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.165054321s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.12( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366336823s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.165100098s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.12( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366321564s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.165100098s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.12( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366356850s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 218.165176392s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.13( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366329193s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active pruub 218.165145874s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[8.12( v 48'45 (0'0,48'45] local-lis/les=55/56 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366338730s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.165176392s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[9.13( v 41'9 (0'0,41'9] local-lis/les=55/56 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=13.366308212s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.165145874s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.15( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.222717285s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active pruub 214.022094727s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:07 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 61 pg[4.15( empty local-lis/les=51/52 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=9.222697258s) [2] r=-1 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 214.022094727s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:44:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[99272]: ts=2025-12-07T09:44:07.994Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  7 04:44:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[99272]: ts=2025-12-07T09:44:07.995Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  7 04:44:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:44:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[99272]: ts=2025-12-07T09:44:08.001Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec  7 04:44:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[99272]: ts=2025-12-07T09:44:08.001Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec  7 04:44:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:08 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffac4001110 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:08 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:08 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 89e42c06-646b-411a-9988-769820a07c08 (Updating alertmanager deployment (+1 -> 1))
Dec  7 04:44:08 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 89e42c06-646b-411a-9988-769820a07c08 (Updating alertmanager deployment (+1 -> 1)) in 6 seconds
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:08 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 160ea464-a3b9-47f3-85ee-049534d6ef83 (Updating grafana deployment (+1 -> 1))
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec  7 04:44:08 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Dec  7 04:44:08 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[5.1d( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.1e( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.18( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[12.12( v 58'1 (0'0,58'1] local-lis/les=61/62 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[5.17( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.10( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[12.19( v 58'1 (0'0,58'1] local-lis/les=61/62 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[5.a( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.8( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[12.6( v 58'1 (0'0,58'1] local-lis/les=61/62 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.9( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[12.1c( v 58'1 (0'0,58'1] local-lis/les=61/62 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[5.5( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[12.c( v 58'1 (0'0,58'1] local-lis/les=61/62 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.e( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[12.10( v 58'1 (0'0,58'1] local-lis/les=61/62 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.1b( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[12.b( v 58'1 (0'0,58'1] local-lis/les=61/62 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[12.e( v 58'1 (0'0,58'1] local-lis/les=61/62 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[12.a( v 58'1 (0'0,58'1] local-lis/les=61/62 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.6( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[5.c( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[5.3( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[5.19( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.2( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[12.8( v 58'1 (0'0,58'1] local-lis/les=61/62 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[5.6( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.4( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.f( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[5.14( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.b( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.13( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[5.1e( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 62 pg[7.3( empty local-lis/les=61/62 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Dec  7 04:44:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  7 04:44:09 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:09 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Dec  7 04:44:09 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Dec  7 04:44:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:09 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  7 04:44:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v70: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 475 B/s rd, 158 B/s wr, 0 op/s
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: Regenerating cephadm self-signed grafana TLS certificates
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: Deploying daemon grafana.compute-0 on compute-0
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec  7 04:44:09 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[6.6( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.259818077s) [1] r=-1 lpr=63 pi=[53,63)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.049530029s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[6.6( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.259658813s) [1] r=-1 lpr=63 pi=[53,63)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.049530029s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[6.2( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.258338928s) [1] r=-1 lpr=63 pi=[53,63)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.048461914s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[6.2( v 46'39 (0'0,46'39] local-lis/les=53/54 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.258308411s) [1] r=-1 lpr=63 pi=[53,63)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.048461914s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[6.e( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.254360199s) [1] r=-1 lpr=63 pi=[53,63)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.044677734s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[6.a( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.258102417s) [1] r=-1 lpr=63 pi=[53,63)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 216.048446655s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[6.e( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.254337311s) [1] r=-1 lpr=63 pi=[53,63)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.044677734s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[6.a( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=9.258089066s) [1] r=-1 lpr=63 pi=[53,63)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.048446655s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[10.12( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 63 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[99272]: ts=2025-12-07T09:44:09.955Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000239989s
Dec  7 04:44:09 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec  7 04:44:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:10 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec  7 04:44:10 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 25 completed events
Dec  7 04:44:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:44:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:10 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:10 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 0df423db-3365-4e01-95ba-cc3104c44971 (Global Recovery Event) in 15 seconds
Dec  7 04:44:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec  7 04:44:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec  7 04:44:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.12( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 64 pg[10.12( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:10 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  7 04:44:10 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  7 04:44:10 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:11 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:11 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec  7 04:44:11 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec  7 04:44:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v73: 337 pgs: 1 active+recovery_wait, 1 active+recovering+remapped, 8 unknown, 4 active+recovery_wait+remapped, 1 active+recovery_wait+degraded, 11 active+remapped, 4 peering, 1 active+recovering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/179 objects degraded (0.559%); 32/179 objects misplaced (17.877%); 857 B/s, 2 keys/s, 20 objects/s recovering
Dec  7 04:44:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec  7 04:44:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec  7 04:44:11 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec  7 04:44:11 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 1/179 objects degraded (0.559%), 1 pg degraded (PG_DEGRADED)
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec  7 04:44:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:12 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec  7 04:44:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:12 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:44:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec  7 04:44:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec  7 04:44:12 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=4 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=4 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.2( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.2( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 66 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:12 np0005549474 ceph-mon[74516]: Health check failed: Degraded data redundancy: 1/179 objects degraded (0.559%), 1 pg degraded (PG_DEGRADED)
Dec  7 04:44:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:13 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:13 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec  7 04:44:13 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec  7 04:44:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v76: 337 pgs: 1 active+recovery_wait, 1 active+recovering+remapped, 8 unknown, 4 active+recovery_wait+remapped, 1 active+recovery_wait+degraded, 11 active+remapped, 4 peering, 1 active+recovering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/179 objects degraded (0.559%); 32/179 objects misplaced (17.877%); 857 B/s, 2 keys/s, 20 objects/s recovering
Dec  7 04:44:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec  7 04:44:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec  7 04:44:13 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec  7 04:44:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 67 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=4 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 67 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 67 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 67 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 67 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 67 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 67 pg[10.2( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 67 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:14 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Dec  7 04:44:14 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Dec  7 04:44:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:14 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:14 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:15 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:15 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec  7 04:44:15 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec  7 04:44:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v78: 337 pgs: 1 active+recovery_wait, 1 active+recovering+remapped, 8 unknown, 4 active+recovery_wait+remapped, 1 active+recovery_wait+degraded, 11 active+remapped, 4 peering, 1 active+recovering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/179 objects degraded (0.559%); 32/179 objects misplaced (17.877%)
Dec  7 04:44:15 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 26 completed events
Dec  7 04:44:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:44:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:15 np0005549474 ceph-mgr[74811]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Dec  7 04:44:15 np0005549474 podman[99387]: 2025-12-07 09:44:15.928751623 +0000 UTC m=+6.279949476 container create 976c9a19b5f30e4b8697334e7a8ffde96102a96a71476521b16c5c5ddd6c26e5 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_grothendieck, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:15 np0005549474 systemd[1]: Started libpod-conmon-976c9a19b5f30e4b8697334e7a8ffde96102a96a71476521b16c5c5ddd6c26e5.scope.
Dec  7 04:44:15 np0005549474 podman[99387]: 2025-12-07 09:44:15.905454355 +0000 UTC m=+6.256652248 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  7 04:44:15 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:15 np0005549474 podman[99387]: 2025-12-07 09:44:15.998500203 +0000 UTC m=+6.349698096 container init 976c9a19b5f30e4b8697334e7a8ffde96102a96a71476521b16c5c5ddd6c26e5 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_grothendieck, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:16 np0005549474 podman[99387]: 2025-12-07 09:44:16.004641979 +0000 UTC m=+6.355839822 container start 976c9a19b5f30e4b8697334e7a8ffde96102a96a71476521b16c5c5ddd6c26e5 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_grothendieck, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:16 np0005549474 podman[99387]: 2025-12-07 09:44:16.007397208 +0000 UTC m=+6.358595081 container attach 976c9a19b5f30e4b8697334e7a8ffde96102a96a71476521b16c5c5ddd6c26e5 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_grothendieck, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:16 np0005549474 intelligent_grothendieck[99607]: 472 0
Dec  7 04:44:16 np0005549474 systemd[1]: libpod-976c9a19b5f30e4b8697334e7a8ffde96102a96a71476521b16c5c5ddd6c26e5.scope: Deactivated successfully.
Dec  7 04:44:16 np0005549474 podman[99387]: 2025-12-07 09:44:16.008464939 +0000 UTC m=+6.359662772 container died 976c9a19b5f30e4b8697334e7a8ffde96102a96a71476521b16c5c5ddd6c26e5 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_grothendieck, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:16 np0005549474 systemd[1]: var-lib-containers-storage-overlay-c688a405dfd329bb60f64a4b634ed423653b3ef9f4792c83d3f9d9d2825c5b8d-merged.mount: Deactivated successfully.
Dec  7 04:44:16 np0005549474 podman[99387]: 2025-12-07 09:44:16.042171676 +0000 UTC m=+6.393369509 container remove 976c9a19b5f30e4b8697334e7a8ffde96102a96a71476521b16c5c5ddd6c26e5 (image=quay.io/ceph/grafana:10.4.0, name=intelligent_grothendieck, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:16 np0005549474 systemd[1]: libpod-conmon-976c9a19b5f30e4b8697334e7a8ffde96102a96a71476521b16c5c5ddd6c26e5.scope: Deactivated successfully.
Dec  7 04:44:16 np0005549474 podman[99624]: 2025-12-07 09:44:16.111581226 +0000 UTC m=+0.047750810 container create f9437cc2ab7c953bfa55e1e4956e13272a4759f8b3e5183887239e146f88b4d5 (image=quay.io/ceph/grafana:10.4.0, name=elastic_margulis, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:16 np0005549474 systemd[1]: Started libpod-conmon-f9437cc2ab7c953bfa55e1e4956e13272a4759f8b3e5183887239e146f88b4d5.scope.
Dec  7 04:44:16 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:16 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:16 np0005549474 podman[99624]: 2025-12-07 09:44:16.170077014 +0000 UTC m=+0.106246608 container init f9437cc2ab7c953bfa55e1e4956e13272a4759f8b3e5183887239e146f88b4d5 (image=quay.io/ceph/grafana:10.4.0, name=elastic_margulis, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:16 np0005549474 podman[99624]: 2025-12-07 09:44:16.175543241 +0000 UTC m=+0.111712825 container start f9437cc2ab7c953bfa55e1e4956e13272a4759f8b3e5183887239e146f88b4d5 (image=quay.io/ceph/grafana:10.4.0, name=elastic_margulis, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:16 np0005549474 elastic_margulis[99640]: 472 0
Dec  7 04:44:16 np0005549474 systemd[1]: libpod-f9437cc2ab7c953bfa55e1e4956e13272a4759f8b3e5183887239e146f88b4d5.scope: Deactivated successfully.
Dec  7 04:44:16 np0005549474 podman[99624]: 2025-12-07 09:44:16.179252437 +0000 UTC m=+0.115422051 container attach f9437cc2ab7c953bfa55e1e4956e13272a4759f8b3e5183887239e146f88b4d5 (image=quay.io/ceph/grafana:10.4.0, name=elastic_margulis, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:16 np0005549474 podman[99624]: 2025-12-07 09:44:16.179610958 +0000 UTC m=+0.115780552 container died f9437cc2ab7c953bfa55e1e4956e13272a4759f8b3e5183887239e146f88b4d5 (image=quay.io/ceph/grafana:10.4.0, name=elastic_margulis, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:16 np0005549474 podman[99624]: 2025-12-07 09:44:16.089311307 +0000 UTC m=+0.025480911 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  7 04:44:16 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4808b599a39aaf1f2b583bbc0c9c14a98bc8d68e0e635081b78786d7268c840a-merged.mount: Deactivated successfully.
Dec  7 04:44:16 np0005549474 podman[99624]: 2025-12-07 09:44:16.218247885 +0000 UTC m=+0.154417479 container remove f9437cc2ab7c953bfa55e1e4956e13272a4759f8b3e5183887239e146f88b4d5 (image=quay.io/ceph/grafana:10.4.0, name=elastic_margulis, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:16 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec  7 04:44:16 np0005549474 systemd[1]: libpod-conmon-f9437cc2ab7c953bfa55e1e4956e13272a4759f8b3e5183887239e146f88b4d5.scope: Deactivated successfully.
Dec  7 04:44:16 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec  7 04:44:16 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:16 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:16 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:16 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:16 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:16 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:16 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:16 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:16 np0005549474 systemd[1]: Starting Ceph grafana.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:17 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:17 np0005549474 podman[99781]: 2025-12-07 09:44:17.167508771 +0000 UTC m=+0.046940917 container create 43800770719fe020c26f12af78188fe644bc6c260647b0a99f09085a97e3e860 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa0cbd381729b8617a0c6fe30ab1b1fb652594b53eb847a449211e828eef2f9/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa0cbd381729b8617a0c6fe30ab1b1fb652594b53eb847a449211e828eef2f9/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa0cbd381729b8617a0c6fe30ab1b1fb652594b53eb847a449211e828eef2f9/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa0cbd381729b8617a0c6fe30ab1b1fb652594b53eb847a449211e828eef2f9/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa0cbd381729b8617a0c6fe30ab1b1fb652594b53eb847a449211e828eef2f9/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:17 np0005549474 podman[99781]: 2025-12-07 09:44:17.229483009 +0000 UTC m=+0.108915205 container init 43800770719fe020c26f12af78188fe644bc6c260647b0a99f09085a97e3e860 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:17 np0005549474 podman[99781]: 2025-12-07 09:44:17.236622064 +0000 UTC m=+0.116054210 container start 43800770719fe020c26f12af78188fe644bc6c260647b0a99f09085a97e3e860 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:17 np0005549474 podman[99781]: 2025-12-07 09:44:17.145790708 +0000 UTC m=+0.025222884 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  7 04:44:17 np0005549474 bash[99781]: 43800770719fe020c26f12af78188fe644bc6c260647b0a99f09085a97e3e860
Dec  7 04:44:17 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec  7 04:44:17 np0005549474 systemd[1]: Started Ceph grafana.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:44:17 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:17 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 160ea464-a3b9-47f3-85ee-049534d6ef83 (Updating grafana deployment (+1 -> 1))
Dec  7 04:44:17 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 160ea464-a3b9-47f3-85ee-049534d6ef83 (Updating grafana deployment (+1 -> 1)) in 8 seconds
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:17 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev faf3167b-877d-4dc0-9365-613025671afb (Updating ingress.rgw.default deployment (+4 -> 4))
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:17 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.toeiml on compute-0
Dec  7 04:44:17 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.toeiml on compute-0
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.412703064Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-07T09:44:17Z
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.4132499Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.41327405Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413283031Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413292871Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413300941Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413308841Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413316212Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413324052Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413332602Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413339852Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413347342Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413355253Z level=info msg=Target target=[all]
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413369653Z level=info msg="Path Home" path=/usr/share/grafana
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413377163Z level=info msg="Path Data" path=/var/lib/grafana
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413384623Z level=info msg="Path Logs" path=/var/log/grafana
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413393734Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413401354Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=settings t=2025-12-07T09:44:17.413408514Z level=info msg="App mode production"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=sqlstore t=2025-12-07T09:44:17.414001151Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=sqlstore t=2025-12-07T09:44:17.414039362Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.415425352Z level=info msg="Starting DB migrations"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.417737579Z level=info msg="Executing migration" id="create migration_log table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.419968932Z level=info msg="Migration successfully executed" id="create migration_log table" duration=2.229593ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.422488504Z level=info msg="Executing migration" id="create user table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.423880765Z level=info msg="Migration successfully executed" id="create user table" duration=1.391781ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.426280194Z level=info msg="Executing migration" id="add unique index user.login"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.428379793Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=2.099309ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.431582305Z level=info msg="Executing migration" id="add unique index user.email"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.432896333Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.313398ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.435039105Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.436413684Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.373949ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.438573946Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.439835102Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.261175ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.441885891Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.446186554Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.296093ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.448575833Z level=info msg="Executing migration" id="create user table v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.449920441Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.343858ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.45195863Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.452929718Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=973.167µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.454992507Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.455892392Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=896.915µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.457934201Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.458620261Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=689.98µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.460264708Z level=info msg="Executing migration" id="Drop old table user_v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.460773443Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=508.515µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.461935736Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.463023427Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.087761ms
Dec  7 04:44:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v79: 337 pgs: 337 active+clean; 455 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 92 op/s; 325 B/s, 11 objects/s recovering
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.464604992Z level=info msg="Executing migration" id="Update user table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.464671354Z level=info msg="Migration successfully executed" id="Update user table charset" duration=68.402µs
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.466406954Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.467575457Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.169833ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.469150123Z level=info msg="Executing migration" id="Add missing user data"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.46940636Z level=info msg="Migration successfully executed" id="Add missing user data" duration=255.997µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.471100659Z level=info msg="Executing migration" id="Add is_disabled column to user"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.472019765Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=917.326µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.473346543Z level=info msg="Executing migration" id="Add index user.login/user.email"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.473896269Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=549.166µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.475349231Z level=info msg="Executing migration" id="Add is_service_account column to user"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.476318378Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=968.248µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.477948505Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.484380959Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.432094ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.486156541Z level=info msg="Executing migration" id="Add uid column to user"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.487015155Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=858.474µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.488639802Z level=info msg="Executing migration" id="Update uid column values for users"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.488847148Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=208.026µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.490690361Z level=info msg="Executing migration" id="Add unique index user_uid"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.491465443Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=774.612µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.493494521Z level=info msg="Executing migration" id="create temp user table v1-7"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.494351366Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=855.715µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.496406365Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.496972471Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=566.106µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.498635028Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.499177904Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=545.706µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.50077743Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.501441759Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=664.099µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.502916081Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.503534619Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=615.688µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.505155626Z level=info msg="Executing migration" id="Update temp_user table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.505179356Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=24.01µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.506698589Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.507344249Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=645.94µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.508587564Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.509217602Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=608.297µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.510633823Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.511214319Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=580.716µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.512857467Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.513514996Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=657.408µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.515490581Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.518191049Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.698118ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.519860507Z level=info msg="Executing migration" id="create temp_user v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.520809204Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=948.087µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.522815502Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.52344885Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=633.268µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.525337354Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.52590569Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=569.976µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.527462086Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.528104294Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=641.938µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.529881695Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.530461231Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=640.428µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.532230622Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.532549882Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=318.01µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.534454536Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.534933609Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=478.633µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.536439273Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.536745172Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=305.388µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.538400819Z level=info msg="Executing migration" id="create star table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.538924784Z level=info msg="Migration successfully executed" id="create star table" duration=523.705µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.540693364Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.541298312Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=604.498µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.542744843Z level=info msg="Executing migration" id="create org table v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.543417682Z level=info msg="Migration successfully executed" id="create org table v1" duration=672.359µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.544958957Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.54576296Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=803.513µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.547447329Z level=info msg="Executing migration" id="create org_user table v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.548079856Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=631.667µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.549700043Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.550319741Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=619.918µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.552162194Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.552782192Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=619.318µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.554424858Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.555046587Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=620.639µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.556734395Z level=info msg="Executing migration" id="Update org table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.556758615Z level=info msg="Migration successfully executed" id="Update org table charset" duration=25.65µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.558320741Z level=info msg="Executing migration" id="Update org_user table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.558342581Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=22.861µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.559872095Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.560015209Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=143.084µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.56143107Z level=info msg="Executing migration" id="create dashboard table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.562084588Z level=info msg="Migration successfully executed" id="create dashboard table" duration=653.278µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.563631293Z level=info msg="Executing migration" id="add index dashboard.account_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.564398285Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=766.312µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.566818884Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.567750571Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=927.926µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.572990821Z level=info msg="Executing migration" id="create dashboard_tag table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.574505455Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.516434ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.577438799Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.578282313Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=847.144µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.579999382Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.580631171Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=632.418µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.582180474Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.586697404Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.51384ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.588349591Z level=info msg="Executing migration" id="create dashboard v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.58899737Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=647.289µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.590689408Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.591309447Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=620.129µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.592964984Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.593929591Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=967.487µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.595510257Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.595811715Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=301.568µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.597248107Z level=info msg="Executing migration" id="drop table dashboard_v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.598121992Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=873.805µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.599821611Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.599898563Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=77.062µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.601648263Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.603498996Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.850633ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.60503311Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.606371029Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.335578ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.60782636Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.609106547Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.279987ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.61027654Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.610912109Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=633.519µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.612608737Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.613954536Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.345289ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.615370026Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.615964973Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=594.667µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.61723938Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.617817027Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=577.327µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.619113893Z level=info msg="Executing migration" id="Update dashboard table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.619132654Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=19.401µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.620896745Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.620916436Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=20.39µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.622432359Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.623932212Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.499593ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.625169288Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.62701564Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.837162ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.628337389Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.630055727Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.718128ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.631582052Z level=info msg="Executing migration" id="Add column uid in dashboard"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.632974061Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.391909ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.634086004Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.634273389Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=187.516µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.635932456Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.636523884Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=591.388µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.63781168Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.638413217Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=601.457µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.639623562Z level=info msg="Executing migration" id="Update dashboard title length"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.639640933Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=17.761µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.641127875Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.641767414Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=639.009µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.643136013Z level=info msg="Executing migration" id="create dashboard_provisioning"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.64373149Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=596.357µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.645344166Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.649039192Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.692796ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.650362711Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.651072511Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=708.39µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.652491651Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.653330126Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=838.305µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.65491237Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.655563489Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=649.159µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.656815455Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.657086293Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=271.268µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.658440051Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.658896635Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=456.134µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.660335066Z level=info msg="Executing migration" id="Add check_sum column"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.661773678Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.438632ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.663778335Z level=info msg="Executing migration" id="Add index for dashboard_title"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.664406963Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=607.209µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.666300317Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.666434991Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=135.054µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.667625526Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.667753439Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=128.424µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.669173509Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.669812958Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=639.949µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.671372703Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.672895547Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.522814ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.674238415Z level=info msg="Executing migration" id="create data_source table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.674938245Z level=info msg="Migration successfully executed" id="create data_source table" duration=699.15µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.676664425Z level=info msg="Executing migration" id="add index data_source.account_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.677544789Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=880.405µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.679216118Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.680035771Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=842.974µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.681615877Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.682261195Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=645.218µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.684090148Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.68488822Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=799.372µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.687253868Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.691619133Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.363175ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.693365314Z level=info msg="Executing migration" id="create data_source table v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.694069694Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=704.4µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.695929147Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.696649348Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=719.891µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.698243974Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.698979065Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=734.551µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.700681314Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.701329672Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=605.937µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.703047561Z level=info msg="Executing migration" id="Add column with_credentials"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.706085808Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.036767ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.707804688Z level=info msg="Executing migration" id="Add secure json data column"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.710102574Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.296696ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.711983408Z level=info msg="Executing migration" id="Update data_source table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.712013019Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=27.931µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.713690476Z level=info msg="Executing migration" id="Update initial version to 1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.713880092Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=191.476µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.715395196Z level=info msg="Executing migration" id="Add read_only data column"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.717692212Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.297087ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.719567855Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.71975643Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=188.915µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.721444589Z level=info msg="Executing migration" id="Update json_data with nulls"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.721607984Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=163.744µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.723193229Z level=info msg="Executing migration" id="Add uid column"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.725129674Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.936415ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.726674589Z level=info msg="Executing migration" id="Update uid value"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.726939437Z level=info msg="Migration successfully executed" id="Update uid value" duration=264.848µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.728985855Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.729972123Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=986.748µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.731891979Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.732525017Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=632.708µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.734026489Z level=info msg="Executing migration" id="create api_key table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.734683049Z level=info msg="Migration successfully executed" id="create api_key table" duration=656.51µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.736361936Z level=info msg="Executing migration" id="add index api_key.account_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.736947974Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=585.948µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.738619041Z level=info msg="Executing migration" id="add index api_key.key"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.739564538Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=949.327µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.740968059Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.741832403Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=864.714µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.743546093Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.744266073Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=717.37µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.74554318Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.746134117Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=592.707µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.747687711Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.74831157Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=623.849µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.749896005Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.754641911Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.746126ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.756240096Z level=info msg="Executing migration" id="create api_key table v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.75705259Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=811.934µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.758733348Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.759517461Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=784.033µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.76090026Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.761500917Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=600.427µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.763157635Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.763794113Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=636.228µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.766529612Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.766964734Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=434.802µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.768481268Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.768957002Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=476.384µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.770503365Z level=info msg="Executing migration" id="Update api_key table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.770527816Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=25.481µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.772112412Z level=info msg="Executing migration" id="Add expires to api_key table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.773901024Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.788312ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.775397916Z level=info msg="Executing migration" id="Add service account foreign key"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.777237949Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.839413ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.778611998Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.778745702Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=135.954µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.780167433Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.783625922Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.455159ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.785362812Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.787506684Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.144182ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.788997776Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.789642235Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=644.048µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.790948172Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.791476087Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=527.895µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.792878317Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.793552847Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=676.07µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.796480951Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.797115919Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=635.048µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.801719322Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.802407841Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=685.779µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.804830261Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.805560411Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=729.76µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.80759308Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.807660962Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=68.782µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.809864385Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.809989029Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=128.324µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.811879262Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.814136307Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.256425ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.815542208Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.817971687Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.428139ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.819633195Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.819683036Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=51.051µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.822099376Z level=info msg="Executing migration" id="create quota table v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.822815876Z level=info msg="Migration successfully executed" id="create quota table v1" duration=716.85µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.824257238Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.824938607Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=678.639µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.826400939Z level=info msg="Executing migration" id="Update quota table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.826425209Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=25.36µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.828089038Z level=info msg="Executing migration" id="create plugin_setting table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.828785157Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=696.47µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.830150427Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.830845907Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=692.599µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.832461803Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.834703107Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.240944ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.836056366Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.836077187Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=21.621µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.837573479Z level=info msg="Executing migration" id="create session table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.838310411Z level=info msg="Migration successfully executed" id="create session table" duration=736.622µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.839750942Z level=info msg="Executing migration" id="Drop old table playlist table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.839820594Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=70.032µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.841180463Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.841264896Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=84.043µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.842517711Z level=info msg="Executing migration" id="create playlist table v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.843078497Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=560.316µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.844603441Z level=info msg="Executing migration" id="create playlist item table v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.845207618Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=587.897µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.847419782Z level=info msg="Executing migration" id="Update playlist table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.847441913Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=20.841µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.848802091Z level=info msg="Executing migration" id="Update playlist_item table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.848821272Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=19.551µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.850228913Z level=info msg="Executing migration" id="Add playlist column created_at"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.852605271Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.375408ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.854158305Z level=info msg="Executing migration" id="Add playlist column updated_at"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.856635726Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.477031ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.858056617Z level=info msg="Executing migration" id="drop preferences table v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.858131129Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=75.052µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.859841698Z level=info msg="Executing migration" id="drop preferences table v3"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.85991039Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=68.932µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.861319681Z level=info msg="Executing migration" id="create preferences table v3"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.86198913Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=669.209µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.86336567Z level=info msg="Executing migration" id="Update preferences table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.86338337Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=18.32µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.864812401Z level=info msg="Executing migration" id="Add column team_id in preferences"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.867491507Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.678196ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.868918399Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.869057733Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=139.494µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.870492744Z level=info msg="Executing migration" id="Add column week_start in preferences"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.873405027Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.909124ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.875309972Z level=info msg="Executing migration" id="Add column preferences.json_data"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.877856655Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.545313ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.879594364Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.879662206Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=68.462µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.881704895Z level=info msg="Executing migration" id="Add preferences index org_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.882748856Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.044191ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.884724902Z level=info msg="Executing migration" id="Add preferences index user_id"
Dec  7 04:44:17 np0005549474 podman[99906]: 2025-12-07 09:44:17.884248008 +0000 UTC m=+0.039580946 container create 74b145823b0ef1da92bf14e3539b228b76d7012e19828e9d82383bf8be9de5b7 (image=quay.io/ceph/haproxy:2.3, name=zealous_yalow)
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.885456943Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=731.671µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.887409449Z level=info msg="Executing migration" id="create alert table v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.88847118Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.061821ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.890140897Z level=info msg="Executing migration" id="add index alert org_id & id "
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.890989981Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=849.324µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.892736252Z level=info msg="Executing migration" id="add index alert state"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.893429971Z level=info msg="Migration successfully executed" id="add index alert state" duration=694.069µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.895385428Z level=info msg="Executing migration" id="add index alert dashboard_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.896259892Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=874.855µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.8982619Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.899043522Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=779.232µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.901211335Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.902062209Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=850.804µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.903807409Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.904512579Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=704.71µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.906043703Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.913654732Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.608999ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.915247247Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.915850624Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=603.817µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.917324507Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.917973855Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=649.218µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.919708786Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.919957463Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=251.468µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.921495646Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.921975771Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=480.155µs
Dec  7 04:44:17 np0005549474 systemd[1]: Started libpod-conmon-74b145823b0ef1da92bf14e3539b228b76d7012e19828e9d82383bf8be9de5b7.scope.
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.923394981Z level=info msg="Executing migration" id="create alert_notification table v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.923976817Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=581.526µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.925571024Z level=info msg="Executing migration" id="Add column is_default"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.92859752Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.025936ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.930119463Z level=info msg="Executing migration" id="Add column frequency"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.933481641Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.361137ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.935475577Z level=info msg="Executing migration" id="Add column send_reminder"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.938860255Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.383498ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.940476551Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.943138647Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.664126ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.944593189Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.945279949Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=686.49µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.946792322Z level=info msg="Executing migration" id="Update alert table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.946816272Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=24.6µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.948698407Z level=info msg="Executing migration" id="Update alert_notification table charset"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.948723897Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=25.53µs
Dec  7 04:44:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.950237451Z level=info msg="Executing migration" id="create notification_journal table v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.950837408Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=599.737µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.952357251Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.953031931Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=674.57µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.954637447Z level=info msg="Executing migration" id="drop alert_notification_journal"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.955509373Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=868.565µs
Dec  7 04:44:17 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.957399826Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[99272]: ts=2025-12-07T09:44:17.957Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002182712s
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.958188938Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=789.192µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.960170956Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.96136817Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.197224ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.962996707Z level=info msg="Executing migration" id="Add for to alert table"
Dec  7 04:44:17 np0005549474 podman[99906]: 2025-12-07 09:44:17.867189799 +0000 UTC m=+0.022522757 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.966380313Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.382166ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.968223827Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.971377927Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.15442ms
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.97357611Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.973744555Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=168.525µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.976013Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.976748391Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=735.081µs
Dec  7 04:44:17 np0005549474 podman[99906]: 2025-12-07 09:44:17.977730659 +0000 UTC m=+0.133063607 container init 74b145823b0ef1da92bf14e3539b228b76d7012e19828e9d82383bf8be9de5b7 (image=quay.io/ceph/haproxy:2.3, name=zealous_yalow)
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.978589184Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.979454419Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=864.855µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.981435096Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.984344509Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.908883ms
Dec  7 04:44:17 np0005549474 podman[99906]: 2025-12-07 09:44:17.984469903 +0000 UTC m=+0.139802841 container start 74b145823b0ef1da92bf14e3539b228b76d7012e19828e9d82383bf8be9de5b7 (image=quay.io/ceph/haproxy:2.3, name=zealous_yalow)
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.985978616Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.986040267Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=63.512µs
Dec  7 04:44:17 np0005549474 podman[99906]: 2025-12-07 09:44:17.9878534 +0000 UTC m=+0.143186358 container attach 74b145823b0ef1da92bf14e3539b228b76d7012e19828e9d82383bf8be9de5b7 (image=quay.io/ceph/haproxy:2.3, name=zealous_yalow)
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.988238091Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.989030673Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=789.502µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.991035451Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.991979539Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=943.767µs
Dec  7 04:44:17 np0005549474 zealous_yalow[99922]: 0 0
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.993745839Z level=info msg="Executing migration" id="Drop old annotation table v4"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.993834241Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=88.733µs
Dec  7 04:44:17 np0005549474 podman[99906]: 2025-12-07 09:44:17.994030227 +0000 UTC m=+0.149363185 container died 74b145823b0ef1da92bf14e3539b228b76d7012e19828e9d82383bf8be9de5b7 (image=quay.io/ceph/haproxy:2.3, name=zealous_yalow)
Dec  7 04:44:17 np0005549474 systemd[1]: libpod-74b145823b0ef1da92bf14e3539b228b76d7012e19828e9d82383bf8be9de5b7.scope: Deactivated successfully.
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.995723406Z level=info msg="Executing migration" id="create annotation table v5"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.99657625Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=853.084µs
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.998478955Z level=info msg="Executing migration" id="add index annotation 0 v3"
Dec  7 04:44:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:17.999306928Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=827.853µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.000971306Z level=info msg="Executing migration" id="add index annotation 1 v3"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.001834261Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=861.985µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.003430397Z level=info msg="Executing migration" id="add index annotation 2 v3"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.00424044Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=807.503µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.005732222Z level=info msg="Executing migration" id="add index annotation 3 v3"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.006532576Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=801.244µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.008125411Z level=info msg="Executing migration" id="add index annotation 4 v3"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.009070729Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=944.767µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.011034904Z level=info msg="Executing migration" id="Update annotation table charset"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.011058175Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=24.231µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.012713323Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.016675336Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.958093ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.0192376Z level=info msg="Executing migration" id="Drop category_id index"
Dec  7 04:44:18 np0005549474 systemd[1]: var-lib-containers-storage-overlay-da38bc5ca7a227c0f0437b5f90aba6df8c840f48fb442a1bfd2e455982755584-merged.mount: Deactivated successfully.
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.020795595Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.557565ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.022931956Z level=info msg="Executing migration" id="Add column tags to annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.02586136Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.928624ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.028421214Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.02898702Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=567.476µs
Dec  7 04:44:18 np0005549474 podman[99906]: 2025-12-07 09:44:18.030505153 +0000 UTC m=+0.185838091 container remove 74b145823b0ef1da92bf14e3539b228b76d7012e19828e9d82383bf8be9de5b7 (image=quay.io/ceph/haproxy:2.3, name=zealous_yalow)
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.031107351Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.031833561Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=726.07µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.033369555Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.034046885Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=676.89µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.03596988Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Dec  7 04:44:18 np0005549474 systemd[1]: libpod-conmon-74b145823b0ef1da92bf14e3539b228b76d7012e19828e9d82383bf8be9de5b7.scope: Deactivated successfully.
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.045191735Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=9.210995ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.046956225Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.047718456Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=762.541µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.04923149Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.049901289Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=670.209µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.051425783Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.05167755Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=252.437µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.053184923Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.053704239Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=519.146µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.055257553Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.055419367Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=161.544µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.056819518Z level=info msg="Executing migration" id="Add created time to annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.060109332Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.288574ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.061570514Z level=info msg="Executing migration" id="Add updated time to annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.064871179Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.300365ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.066897977Z level=info msg="Executing migration" id="Add index for created in annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.067704Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=805.974µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.069297025Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.069960834Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=664.009µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.071790357Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.071982043Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=195.756µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.073518687Z level=info msg="Executing migration" id="Add epoch_end column"
Dec  7 04:44:18 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.076514813Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=2.994976ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.07783531Z level=info msg="Executing migration" id="Add index for epoch_end"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.078586822Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=751.892µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.079956801Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.080118926Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=165.595µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.081767114Z level=info msg="Executing migration" id="Move region to single row"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.082074292Z level=info msg="Migration successfully executed" id="Move region to single row" duration=309.918µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.08341298Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.084130451Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=717.621µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.08585522Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.086602502Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=746.812µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.088029843Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.088766084Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=734.831µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.091250745Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.091980226Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=728.601µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.093954783Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.094672573Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=718.19µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.096249549Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.097026381Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=776.832µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.098459552Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.098572766Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=114.084µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.099981316Z level=info msg="Executing migration" id="create test_data table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.100633794Z level=info msg="Migration successfully executed" id="create test_data table" duration=652.428µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.102257511Z level=info msg="Executing migration" id="create dashboard_version table v1"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.102852518Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=594.477µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.105583796Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.106304677Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=720.511µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.108070588Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.108929112Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=856.434µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.111551627Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.112062812Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=518.495µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.114179773Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.114620315Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=440.172µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.116423287Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.11651422Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=90.983µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.118434165Z level=info msg="Executing migration" id="create team table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.119405143Z level=info msg="Migration successfully executed" id="create team table" duration=969.958µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.120905946Z level=info msg="Executing migration" id="add index team.org_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.121998177Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.092821ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.124464198Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.125304422Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=839.444µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.126994781Z level=info msg="Executing migration" id="Add column uid in team"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.130389888Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.393517ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.132617762Z level=info msg="Executing migration" id="Update uid column values in team"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.132781507Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=169.395µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.134360071Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.135158705Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=797.584µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.137865303Z level=info msg="Executing migration" id="create team member table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.138648395Z level=info msg="Migration successfully executed" id="create team member table" duration=783.642µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.140371015Z level=info msg="Executing migration" id="add index team_member.org_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.141053304Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=682.479µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.143160675Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.144306397Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.149943ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.146270704Z level=info msg="Executing migration" id="add index team_member.team_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.147036655Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=765.361µs
Dec  7 04:44:18 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.149212848Z level=info msg="Executing migration" id="Add column email to team table"
Dec  7 04:44:18 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.153215752Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.001464ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.155443436Z level=info msg="Executing migration" id="Add column external to team_member table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.159117662Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.666555ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.160864142Z level=info msg="Executing migration" id="Add column permission to team_member table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.164120445Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.256033ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.165888956Z level=info msg="Executing migration" id="create dashboard acl table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.16671601Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=826.554µs
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: Deploying daemon haproxy.rgw.default.compute-0.toeiml on compute-0
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.168912643Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.170259331Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.347248ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.173536365Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.174830643Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.301328ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.177005755Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.177800888Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=798.153µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.179850757Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.180529096Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=678.099µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.182552224Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.18344181Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=889.775µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.185245542Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.186063405Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=818.033µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.188097203Z level=info msg="Executing migration" id="add index dashboard_permission"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.189091311Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=993.678µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.191503031Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.192121788Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=620.287µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.19425695Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.194457606Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=201.136µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.196487904Z level=info msg="Executing migration" id="create tag table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.197124592Z level=info msg="Migration successfully executed" id="create tag table" duration=636.348µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.199100698Z level=info msg="Executing migration" id="add index tag.key_value"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.199876181Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=774.883µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.201742804Z level=info msg="Executing migration" id="create login attempt table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.202359032Z level=info msg="Migration successfully executed" id="create login attempt table" duration=615.968µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.204161013Z level=info msg="Executing migration" id="add index login_attempt.username"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.204904565Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=743.112µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.206575253Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.207338324Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=762.721µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.209466076Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.219844984Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=10.377868ms
Dec  7 04:44:18 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.22149207Z level=info msg="Executing migration" id="create login_attempt v2"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.222076178Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=584.958µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.223819237Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.224615561Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=796.114µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.22671068Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.227012449Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=302.378µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.228833541Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.229387517Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=553.626µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.230940502Z level=info msg="Executing migration" id="create user auth table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.231606491Z level=info msg="Migration successfully executed" id="create user auth table" duration=667.289µs
Dec  7 04:44:18 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.234172934Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.234903405Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=730.341µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.236823951Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.236876272Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=53.382µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.23823363Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.242096192Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.862482ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.243978115Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.247639591Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.661386ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.24903653Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.252856661Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.820021ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.254381514Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.258612946Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.227731ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.261353784Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.262161287Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=808.023µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.264050232Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.268266472Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=4.216ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.270330382Z level=info msg="Executing migration" id="create server_lock table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.271029651Z level=info msg="Migration successfully executed" id="create server_lock table" duration=699.129µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.272628858Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.273400509Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=771.151µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.274907023Z level=info msg="Executing migration" id="create user auth token table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.275747687Z level=info msg="Migration successfully executed" id="create user auth token table" duration=840.634µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.277683443Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.278473895Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=790.262µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.280160723Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.280966557Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=805.634µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.282411438Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.283226142Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=814.334µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.284739315Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.288523243Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.783298ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.290050087Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.290796308Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=747.461µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.292347823Z level=info msg="Executing migration" id="create cache_data table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.293170716Z level=info msg="Migration successfully executed" id="create cache_data table" duration=822.583µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.294942808Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.295762091Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=819.593µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.297011636Z level=info msg="Executing migration" id="create short_url table v1"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.297742118Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=729.992µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.299250841Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.300041964Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=790.463µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.301639919Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.301697781Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=52.702µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.303628317Z level=info msg="Executing migration" id="delete alert_definition table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.30373047Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=102.853µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.305731277Z level=info msg="Executing migration" id="recreate alert_definition table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.306587861Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=856.834µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.308801865Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.311478032Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=2.675897ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.314056266Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.31490563Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=851.334µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.316950559Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.317007301Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=56.972µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.3183904Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.321493639Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=3.071678ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.327396548Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.328563792Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.170324ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.331222368Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.332252207Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.08698ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.333897625Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.334725928Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=827.623µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.336609503Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.342392559Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.782806ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.343823409Z level=info msg="Executing migration" id="drop alert_definition table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.345270911Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.446462ms
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.347717011Z level=info msg="Executing migration" id="delete alert_definition_version table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.347832705Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=407.013µs
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/179 objects degraded (0.559%), 1 pg degraded)
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.350226713Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.351005536Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=776.133µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.35256445Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.353419385Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=855.175µs
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.355465583Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.356416071Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=950.478µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.358526851Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.358655894Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=135.104µs
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.361144446Z level=info msg="Executing migration" id="drop alert_definition_version table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.362526746Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.38134ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.365256264Z level=info msg="Executing migration" id="create alert_instance table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.366295404Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.03913ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.370080373Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.37104667Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=965.077µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.373310905Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.374227731Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=915.076µs
Dec  7 04:44:18 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.376720383Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.381461739Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.738896ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.383338392Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.384131276Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=791.994µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.385919837Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.386660097Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=740.891µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.388327716Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:18 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.422358722Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=34.023215ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.424275327Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.445866006Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=21.585179ms
Dec  7 04:44:18 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:18 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.450724065Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.451718184Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=992.539µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.453668069Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.454386201Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=717.972µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.456041128Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.460422464Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.381076ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.461878235Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.465897221Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.019106ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.46726799Z level=info msg="Executing migration" id="create alert_rule table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.468020671Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=752.291µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.469681979Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.470480372Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=798.413µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.472788968Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.475362192Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=2.573064ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.478612025Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.481397935Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=2.78545ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.483694891Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.483745002Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=48.501µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.485375299Z level=info msg="Executing migration" id="add column for to alert_rule"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.489735314Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.359665ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.491277559Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.495480499Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.20256ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.496928791Z level=info msg="Executing migration" id="add column labels to alert_rule"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.50146586Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.534399ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.504485027Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.505356452Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=871.455µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.506916737Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.507792422Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=875.295µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.509350697Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.513424664Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.073648ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.514934907Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.519514788Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.578211ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.521281059Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.522055361Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=774.053µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.52377647Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.528026673Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.249562ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.529436462Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.533649064Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.212532ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.535548589Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.535790835Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=242.656µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.537430632Z level=info msg="Executing migration" id="create alert_rule_version table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.538541444Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.110832ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.540257293Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.541049056Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=789.573µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.542792366Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.543739583Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=947.177µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.54571848Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.545762621Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=44.511µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.54711987Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.551503896Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.385745ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.55304046Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.557289071Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.248391ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.55896198Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.563304834Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.342474ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.565155587Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.569491582Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.340165ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.57152309Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.57606607Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.543311ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.577626455Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.577679707Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=53.952µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.579778216Z level=info msg="Executing migration" id=create_alert_configuration_table
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.580503798Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=725.202µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.582143354Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.587328883Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=5.170849ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.590443992Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.590497674Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=54.922µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.592290456Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.596905988Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.614951ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.59870834Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.599642756Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=936.137µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.601538831Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.605860155Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.321274ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.607522022Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.608180962Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=658.55µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.609787347Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.610516198Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=728.391µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.611789525Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.616222292Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.431686ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.618465246Z level=info msg="Executing migration" id="create provenance_type table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.619082684Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=617.278µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.620870415Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.621619306Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=750.921µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.623684236Z level=info msg="Executing migration" id="create alert_image table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.624308624Z level=info msg="Migration successfully executed" id="create alert_image table" duration=623.488µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.625780876Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.626510517Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=729.421µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.628158755Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.628219096Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=60.901µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.630187182Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.630935904Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=749.092µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.632782897Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.635095943Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=2.306586ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.637045689Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.637538023Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.639525301Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.640185389Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=667.619µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.64721558Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.648699133Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.501643ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.65065494Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.655677753Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.021933ms
Dec  7 04:44:18 np0005549474 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.toeiml for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.657742713Z level=info msg="Executing migration" id="create library_element table v1"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.658698351Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=955.538µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.660827551Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.661742408Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=914.807µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.66322764Z level=info msg="Executing migration" id="create library_element_connection table v1"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.663892889Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=666.599µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.665770833Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.666614527Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=843.324µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.668060559Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.6688073Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=746.281µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.67019269Z level=info msg="Executing migration" id="increase max description length to 2048"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.670227491Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=39.031µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.672117875Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.672162246Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=44.811µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.674005459Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.67438455Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=380.821µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.679316522Z level=info msg="Executing migration" id="create data_keys table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.680376432Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.06105ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.682094351Z level=info msg="Executing migration" id="create secrets table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.68275741Z level=info msg="Migration successfully executed" id="create secrets table" duration=663.039µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.684281834Z level=info msg="Executing migration" id="rename data_keys name column to id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:18 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa980016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.712950206Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=28.658242ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.715105678Z level=info msg="Executing migration" id="add name column into data_keys"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.720515794Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.409556ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.72460421Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.724765595Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=162.295µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.751522223Z level=info msg="Executing migration" id="rename data_keys name column to label"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.780003089Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=28.482056ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.781980697Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.809221927Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.237741ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.813006566Z level=info msg="Executing migration" id="create kv_store table v1"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.813777198Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=770.352µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.840883296Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.841895365Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.010028ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.843647885Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.843837711Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=189.776µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.846040523Z level=info msg="Executing migration" id="create permission table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.847000351Z level=info msg="Migration successfully executed" id="create permission table" duration=959.188µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.848906525Z level=info msg="Executing migration" id="add unique index permission.role_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.849697289Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=793.283µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.851497409Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.852308503Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=811.074µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.857130101Z level=info msg="Executing migration" id="create role table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.857837402Z level=info msg="Migration successfully executed" id="create role table" duration=706.911µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.859311794Z level=info msg="Executing migration" id="add column display_name"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.864731489Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.418285ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.866935203Z level=info msg="Executing migration" id="add column group_name"
Dec  7 04:44:18 np0005549474 podman[100063]: 2025-12-07 09:44:18.87241838 +0000 UTC m=+0.042511470 container create ad85b4eacad1a0425a520d7609895a37aa19f2d7fda82bea9975503024e47715 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-rgw-default-compute-0-toeiml)
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.873559642Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.621359ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.877505126Z level=info msg="Executing migration" id="add index role.org_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.879383909Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.879763ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.881929402Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.883162728Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.231836ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.885280108Z level=info msg="Executing migration" id="add index role_org_id_uid"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.886096843Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=816.284µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.887722889Z level=info msg="Executing migration" id="create team role table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.888421819Z level=info msg="Migration successfully executed" id="create team role table" duration=698.32µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.889819719Z level=info msg="Executing migration" id="add index team_role.org_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.890661643Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=841.264µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.892116435Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.89300451Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=887.645µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.894637797Z level=info msg="Executing migration" id="add index team_role.team_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.89544163Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=804.073µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.89682693Z level=info msg="Executing migration" id="create user role table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.897501169Z level=info msg="Migration successfully executed" id="create user role table" duration=673.609µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.899512306Z level=info msg="Executing migration" id="add index user_role.org_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.900330661Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=818.005µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.901814053Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.902624747Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=810.304µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.904342185Z level=info msg="Executing migration" id="add index user_role.user_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.905088896Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=746.481µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.906907159Z level=info msg="Executing migration" id="create builtin role table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.907670381Z level=info msg="Migration successfully executed" id="create builtin role table" duration=763.001µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.909398121Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.910282196Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=883.865µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.912620923Z level=info msg="Executing migration" id="add index builtin_role.name"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.913650903Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.03071ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.915594638Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.921639091Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=6.043513ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.923246468Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.92403518Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=788.402µs
Dec  7 04:44:18 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd4e628523b554924fedd727f8e9991ab3f26b52f4b007f3f686575c70b31c0/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.925698098Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.926501511Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=803.053µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.928265842Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.929040404Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=774.542µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.930668661Z level=info msg="Executing migration" id="add unique index role.uid"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.931477124Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=808.553µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.933334837Z level=info msg="Executing migration" id="create seed assignment table"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.933907074Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=572.047µs
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.935817008Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.936608521Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=790.753µs
Dec  7 04:44:18 np0005549474 podman[100063]: 2025-12-07 09:44:18.937700862 +0000 UTC m=+0.107793972 container init ad85b4eacad1a0425a520d7609895a37aa19f2d7fda82bea9975503024e47715 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-rgw-default-compute-0-toeiml)
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.938400932Z level=info msg="Executing migration" id="add column hidden to role table"
Dec  7 04:44:18 np0005549474 podman[100063]: 2025-12-07 09:44:18.943817157 +0000 UTC m=+0.113910237 container start ad85b4eacad1a0425a520d7609895a37aa19f2d7fda82bea9975503024e47715 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-rgw-default-compute-0-toeiml)
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.944144647Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=5.742785ms
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.946385141Z level=info msg="Executing migration" id="permission kind migration"
Dec  7 04:44:18 np0005549474 bash[100063]: ad85b4eacad1a0425a520d7609895a37aa19f2d7fda82bea9975503024e47715
Dec  7 04:44:18 np0005549474 podman[100063]: 2025-12-07 09:44:18.855472064 +0000 UTC m=+0.025565164 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-rgw-default-compute-0-toeiml[100079]: [NOTICE] 340/094418 (2) : New worker #1 (4) forked
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.952765305Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.376444ms
Dec  7 04:44:18 np0005549474 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.toeiml for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.955546535Z level=info msg="Executing migration" id="permission attribute migration"
Dec  7 04:44:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:18.961259028Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.711825ms
Dec  7 04:44:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000056s ======
Dec  7 04:44:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:18.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Dec  7 04:44:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:44:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:19.012985742Z level=info msg="Executing migration" id="permission identifier migration"
Dec  7 04:44:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:19.018806338Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.822437ms
Dec  7 04:44:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:19 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa980016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:19 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec  7 04:44:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:19.235902335Z level=info msg="Executing migration" id="add permission identifier index"
Dec  7 04:44:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:19.237047188Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.148553ms
Dec  7 04:44:19 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec  7 04:44:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec  7 04:44:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v81: 337 pgs: 337 active+clean; 455 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 0 B/s wr, 85 op/s; 300 B/s, 10 objects/s recovering
Dec  7 04:44:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Dec  7 04:44:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  7 04:44:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec  7 04:44:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  7 04:44:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:44:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:44:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:44:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:44:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:44:19 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:44:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:20 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:20 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa980016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:20 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:20 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 27 completed events
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.901222148Z level=info msg="Executing migration" id="add permission action scope role_id index"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.902771793Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.578046ms
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.906598632Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.907598841Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.000819ms
Dec  7 04:44:20 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.911457292Z level=info msg="Executing migration" id="create query_history table v1"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.912283445Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=827.933µs
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/179 objects degraded (0.559%), 1 pg degraded)
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: Cluster is now healthy
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.915059725Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.915919179Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=857.514µs
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.918873624Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.918917625Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=44.781µs
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.920978685Z level=info msg="Executing migration" id="rbac disabled migrator"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.921005875Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=28.09µs
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.923295221Z level=info msg="Executing migration" id="teams permissions migration"
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.923697313Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=402.222µs
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.925979038Z level=info msg="Executing migration" id="dashboard permissions"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.926370889Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=394.251µs
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.928425679Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.929130979Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=704.06µs
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.930932781Z level=info msg="Executing migration" id="drop managed folder create actions"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.931168967Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=235.886µs
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.933893315Z level=info msg="Executing migration" id="alerting notification permissions"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.934332398Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=438.993µs
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.936370887Z level=info msg="Executing migration" id="create query_history_star table v1"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.937372175Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.000898ms
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.939679471Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.940653009Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=972.578µs
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.942326407Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.947930028Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.603541ms
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.949517803Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.949570005Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=52.572µs
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.95114308Z level=info msg="Executing migration" id="create correlation table v1"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.952985393Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.842043ms
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.954901978Z level=info msg="Executing migration" id="add index correlations.uid"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.956045431Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.143263ms
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.958001506Z level=info msg="Executing migration" id="add index correlations.source_uid"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.959094068Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.092282ms
Dec  7 04:44:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.961192588Z level=info msg="Executing migration" id="add correlation config column"
Dec  7 04:44:20 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.soidop on compute-2
Dec  7 04:44:20 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.soidop on compute-2
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.968763515Z level=info msg="Migration successfully executed" id="add correlation config column" duration=7.570747ms
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.970698961Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.97171032Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.011129ms
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.973650846Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Dec  7 04:44:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:20.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.974620203Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=966.867µs
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.976620181Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.997258312Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=20.637821ms
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.998663023Z level=info msg="Executing migration" id="create correlation v2"
Dec  7 04:44:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:20.999736953Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.07366ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.001659189Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.002626796Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=967.197µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.004341336Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.005625902Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.284186ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.007801605Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.008759282Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=957.397µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.010518293Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.010743609Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=225.396µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.012556382Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.013374905Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=818.294µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.015436854Z level=info msg="Executing migration" id="add provisioning column"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.023001671Z level=info msg="Migration successfully executed" id="add provisioning column" duration=7.557797ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.024752331Z level=info msg="Executing migration" id="create entity_events table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.025513473Z level=info msg="Migration successfully executed" id="create entity_events table" duration=760.911µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.027403497Z level=info msg="Executing migration" id="create dashboard public config v1"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.028393575Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=989.528µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.030171766Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.030569218Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.032193605Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.032585876Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.034329036Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.035288663Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=959.467µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.037322762Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.038256619Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=934.926µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.040242216Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.041183002Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=940.226µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.043040046Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.044066885Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.026339ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.045767994Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.046736532Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=968.868µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.048642407Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.049612234Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=969.657µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.051158509Z level=info msg="Executing migration" id="Drop public config table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.052043894Z level=info msg="Migration successfully executed" id="Drop public config table" duration=885.105µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.053620029Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.054609088Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=988.749µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.05676136Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.057732017Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=971.637µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.059669273Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.060750024Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.080011ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.062663899Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.063724049Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.05929ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.065583892Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.090779775Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.202813ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.092569586Z level=info msg="Executing migration" id="add annotations_enabled column"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.10038506Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.815034ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.101908904Z level=info msg="Executing migration" id="add time_selection_enabled column"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.10944189Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.532316ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.111068807Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.111285093Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=216.286µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.113022573Z level=info msg="Executing migration" id="add share column"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:21 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.120938599Z level=info msg="Migration successfully executed" id="add share column" duration=7.914656ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.122385681Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.122574847Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=189.176µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.12443316Z level=info msg="Executing migration" id="create file table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.125325996Z level=info msg="Migration successfully executed" id="create file table" duration=892.626µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.127437546Z level=info msg="Executing migration" id="file table idx: path natural pk"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.128471616Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.03138ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.130218456Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.13243513Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.216514ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.134513849Z level=info msg="Executing migration" id="create file_meta table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.135162808Z level=info msg="Migration successfully executed" id="create file_meta table" duration=648.939µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.13697613Z level=info msg="Executing migration" id="file table idx: path key"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.137855215Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=878.505µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.139759739Z level=info msg="Executing migration" id="set path collation in file table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.139809161Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=51.542µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.141257363Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.141304124Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=47.341µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.142952881Z level=info msg="Executing migration" id="managed permissions migration"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.143381873Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=428.322µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.145052591Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.145230206Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=177.895µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.146881644Z level=info msg="Executing migration" id="RBAC action name migrator"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.148057588Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.175393ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.14988403Z level=info msg="Executing migration" id="Add UID column to playlist"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.156718176Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.831876ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.158778165Z level=info msg="Executing migration" id="Update uid column values in playlist"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.158914259Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=137.364µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.161247476Z level=info msg="Executing migration" id="Add index for uid in playlist"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.162251045Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.004138ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.164150569Z level=info msg="Executing migration" id="update group index for alert rules"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.164490689Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=337.26µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.166095135Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.166285921Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=191.276µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.167802824Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.168150614Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=347.59µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.169852253Z level=info msg="Executing migration" id="add action column to seed_assignment"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.176249836Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.397653ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.178053608Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.183899366Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=5.848728ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.185704658Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.186610103Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=905.515µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.188279752Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.261601465Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=73.310952ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.263696394Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.264727814Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.03146ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.266810564Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.267697749Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=886.925µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.269669666Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Dec  7 04:44:21 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.292276654Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=22.601998ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.294456116Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Dec  7 04:44:21 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.301671563Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.214787ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.303363662Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.303661631Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=300.839µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.305720299Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.305855273Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=136.684µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.307761248Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.308267782Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=508.334µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.313277596Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.313458031Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=180.835µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.31514255Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.315353546Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=211.466µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.317118757Z level=info msg="Executing migration" id="create folder table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.318094045Z level=info msg="Migration successfully executed" id="create folder table" duration=973.009µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.319763462Z level=info msg="Executing migration" id="Add index for parent_uid"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.320942226Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.178364ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.323049487Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.324119097Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.06943ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.326385872Z level=info msg="Executing migration" id="Update folder title length"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.326408283Z level=info msg="Migration successfully executed" id="Update folder title length" duration=23.291µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.328984527Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.330035527Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.05058ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.332087096Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.334083193Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.996207ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.336364208Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.337541092Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.174734ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.339263171Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.339782177Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=519.326µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.341867996Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.342387142Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=519.396µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.345148971Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.3472378Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=2.089889ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.349765153Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.351884464Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=2.119301ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.354280082Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.356138025Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.857953ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.358456952Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.360665375Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.203723ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.363057674Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.36499703Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.939606ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.367164081Z level=info msg="Executing migration" id="create anon_device table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.368794919Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.629588ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.371070994Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.373320129Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.248445ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.37582485Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.377707714Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.882314ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.379974529Z level=info msg="Executing migration" id="create signing_key table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.381669308Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.694499ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.384630103Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.386750433Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=2.12071ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.389242445Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.391288194Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.047989ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.393464316Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.393925179Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=461.633µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.396403461Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.410915207Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=14.511166ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.413086779Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.413990675Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=905.196µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.41589942Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.41732256Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.4228ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.419239725Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.420633095Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.39364ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.422418396Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.423444626Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.026329ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.425298989Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.426567806Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.268467ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.428324956Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.429374777Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.04973ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.431227899Z level=info msg="Executing migration" id="create sso_setting table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.43265258Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.426441ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.435332157Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.436248893Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=917.636µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.438336853Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.43856676Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=230.457µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.440774573Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.440832285Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=60.892µs
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.442913424Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.450872962Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=7.958468ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.453296952Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.461442706Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=8.145284ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.463396822Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.463731292Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=333.92µs
Dec  7 04:44:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v83: 337 pgs: 1 active+clean+scrubbing, 4 peering, 332 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 73 op/s; 261 B/s, 9 objects/s recovering
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=migrator t=2025-12-07T09:44:21.465630766Z level=info msg="migrations completed" performed=547 skipped=0 duration=4.04797978s
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=sqlstore t=2025-12-07T09:44:21.466914633Z level=info msg="Created default organization"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=secrets t=2025-12-07T09:44:21.468788797Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=plugin.store t=2025-12-07T09:44:21.500119185Z level=info msg="Loading plugins..."
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=local.finder t=2025-12-07T09:44:21.592267868Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=plugin.store t=2025-12-07T09:44:21.592370641Z level=info msg="Plugins loaded" count=55 duration=92.252536ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=query_data t=2025-12-07T09:44:21.594960426Z level=info msg="Query Service initialization"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=live.push_http t=2025-12-07T09:44:21.598792375Z level=info msg="Live Push Gateway initialization"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ngalert.migration t=2025-12-07T09:44:21.602678006Z level=info msg=Starting
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ngalert.migration t=2025-12-07T09:44:21.603067128Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ngalert.migration orgID=1 t=2025-12-07T09:44:21.603466899Z level=info msg="Migrating alerts for organisation"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ngalert.migration orgID=1 t=2025-12-07T09:44:21.604254762Z level=info msg="Alerts found to migrate" alerts=0
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ngalert.migration t=2025-12-07T09:44:21.60590476Z level=info msg="Completed alerting migration"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ngalert.state.manager t=2025-12-07T09:44:21.631070721Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=infra.usagestats.collector t=2025-12-07T09:44:21.633429208Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=provisioning.datasources t=2025-12-07T09:44:21.634763117Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=provisioning.alerting t=2025-12-07T09:44:21.647669926Z level=info msg="starting to provision alerting"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=provisioning.alerting t=2025-12-07T09:44:21.647699928Z level=info msg="finished to provision alerting"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=grafanaStorageLogger t=2025-12-07T09:44:21.647872063Z level=info msg="Storage starting"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ngalert.state.manager t=2025-12-07T09:44:21.648909233Z level=info msg="Warming state cache for startup"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=provisioning.dashboard t=2025-12-07T09:44:21.649419847Z level=info msg="starting to provision dashboards"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ngalert.multiorg.alertmanager t=2025-12-07T09:44:21.649630343Z level=info msg="Starting MultiOrg Alertmanager"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=http.server t=2025-12-07T09:44:21.65367636Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=http.server t=2025-12-07T09:44:21.654179384Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ngalert.state.manager t=2025-12-07T09:44:21.677665977Z level=info msg="State cache has been initialized" states=0 duration=28.736444ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ngalert.scheduler t=2025-12-07T09:44:21.677703048Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ticker t=2025-12-07T09:44:21.677749209Z level=info msg=starting first_tick=2025-12-07T09:44:30Z
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=plugins.update.checker t=2025-12-07T09:44:21.731369517Z level=info msg="Update check succeeded" duration=83.454263ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=grafana.update.checker t=2025-12-07T09:44:21.732315655Z level=info msg="Update check succeeded" duration=83.905367ms
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=sqlstore.transactions t=2025-12-07T09:44:21.746275345Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=sqlstore.transactions t=2025-12-07T09:44:21.753728319Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=sqlstore.transactions t=2025-12-07T09:44:21.757720294Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=sqlstore.transactions t=2025-12-07T09:44:21.768640967Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=sqlstore.transactions t=2025-12-07T09:44:21.813141873Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=sqlstore.transactions t=2025-12-07T09:44:21.823961573Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=sqlstore.transactions t=2025-12-07T09:44:21.835851035Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=grafana-apiserver t=2025-12-07T09:44:21.929064278Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec  7 04:44:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=grafana-apiserver t=2025-12-07T09:44:21.929692846Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:21 np0005549474 ceph-mon[74516]: Deploying daemon haproxy.rgw.default.compute-2.soidop on compute-2
Dec  7 04:44:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=provisioning.dashboard t=2025-12-07T09:44:22.018235955Z level=info msg="finished to provision dashboards"
Dec  7 04:44:22 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec  7 04:44:22 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec  7 04:44:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:22 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:22 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec  7 04:44:22 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  7 04:44:22 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  7 04:44:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec  7 04:44:22 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec  7 04:44:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:22.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:23.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:44:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:23 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa0003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.149228) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663149261, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1040, "num_deletes": 251, "total_data_size": 1278366, "memory_usage": 1299832, "flush_reason": "Manual Compaction"}
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663165792, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1211916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6881, "largest_seqno": 7920, "table_properties": {"data_size": 1206500, "index_size": 2620, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15161, "raw_average_key_size": 22, "raw_value_size": 1194170, "raw_average_value_size": 1735, "num_data_blocks": 116, "num_entries": 688, "num_filter_entries": 688, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100638, "oldest_key_time": 1765100638, "file_creation_time": 1765100663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 16605 microseconds, and 3988 cpu microseconds.
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.165832) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1211916 bytes OK
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.165853) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.167592) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.167608) EVENT_LOG_v1 {"time_micros": 1765100663167604, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.167626) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1272767, prev total WAL file size 1273114, number of live WAL files 2.
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.168338) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1183KB)], [20(11MB)]
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663168398, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 13143869, "oldest_snapshot_seqno": -1}
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:23 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:44:23 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:44:23 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:44:23 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:44:23 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.qnwhtu on compute-2
Dec  7 04:44:23 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.qnwhtu on compute-2
Dec  7 04:44:23 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec  7 04:44:23 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3041 keys, 11925330 bytes, temperature: kUnknown
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663300856, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 11925330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11900581, "index_size": 16064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7621, "raw_key_size": 77934, "raw_average_key_size": 25, "raw_value_size": 11840188, "raw_average_value_size": 3893, "num_data_blocks": 703, "num_entries": 3041, "num_filter_entries": 3041, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765100663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.301112) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11925330 bytes
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.302518) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 99.2 rd, 90.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.4 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(20.7) write-amplify(9.8) OK, records in: 3567, records dropped: 526 output_compression: NoCompression
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.302537) EVENT_LOG_v1 {"time_micros": 1765100663302528, "job": 6, "event": "compaction_finished", "compaction_time_micros": 132534, "compaction_time_cpu_micros": 40043, "output_level": 6, "num_output_files": 1, "total_output_size": 11925330, "num_input_records": 3567, "num_output_records": 3041, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663302924, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663305513, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.168236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.305542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.305547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.305549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.305551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:44:23.305553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:44:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v86: 337 pgs: 1 active+clean+scrubbing, 4 peering, 332 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:44:23 np0005549474 ceph-mon[74516]: Deploying daemon keepalived.rgw.default.compute-2.qnwhtu on compute-2
Dec  7 04:44:24 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec  7 04:44:24 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec  7 04:44:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:24 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffab0004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:24 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:24.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec  7 04:44:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec  7 04:44:25 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec  7 04:44:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:25.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:44:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:44:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 04:44:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:25 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:25 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:44:25 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:44:25 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:44:25 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:44:25 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.xnnorz on compute-0
Dec  7 04:44:25 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.xnnorz on compute-0
Dec  7 04:44:25 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec  7 04:44:25 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec  7 04:44:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v89: 337 pgs: 1 active+clean+scrubbing, 4 peering, 332 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:44:25 np0005549474 podman[100201]: 2025-12-07 09:44:25.618470693 +0000 UTC m=+0.044766725 container create 86de278ffbaeaf6c13388f16397436810bb9c4d56481551613c98627e4a681e1 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_liskov, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vendor=Red Hat, Inc., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2)
Dec  7 04:44:25 np0005549474 systemd[1]: Started libpod-conmon-86de278ffbaeaf6c13388f16397436810bb9c4d56481551613c98627e4a681e1.scope.
Dec  7 04:44:25 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:25 np0005549474 podman[100201]: 2025-12-07 09:44:25.597263945 +0000 UTC m=+0.023559997 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  7 04:44:25 np0005549474 podman[100201]: 2025-12-07 09:44:25.700035612 +0000 UTC m=+0.126331664 container init 86de278ffbaeaf6c13388f16397436810bb9c4d56481551613c98627e4a681e1 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_liskov, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived)
Dec  7 04:44:25 np0005549474 podman[100201]: 2025-12-07 09:44:25.707140867 +0000 UTC m=+0.133436899 container start 86de278ffbaeaf6c13388f16397436810bb9c4d56481551613c98627e4a681e1 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_liskov, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, architecture=x86_64, release=1793, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20)
Dec  7 04:44:25 np0005549474 podman[100201]: 2025-12-07 09:44:25.710311358 +0000 UTC m=+0.136607410 container attach 86de278ffbaeaf6c13388f16397436810bb9c4d56481551613c98627e4a681e1 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_liskov, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vendor=Red Hat, Inc., version=2.2.4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.expose-services=)
Dec  7 04:44:25 np0005549474 wizardly_liskov[100217]: 0 0
Dec  7 04:44:25 np0005549474 systemd[1]: libpod-86de278ffbaeaf6c13388f16397436810bb9c4d56481551613c98627e4a681e1.scope: Deactivated successfully.
Dec  7 04:44:25 np0005549474 podman[100201]: 2025-12-07 09:44:25.714401814 +0000 UTC m=+0.140697846 container died 86de278ffbaeaf6c13388f16397436810bb9c4d56481551613c98627e4a681e1 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_liskov, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=2.2.4, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  7 04:44:25 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2bc298a66deb728d5443447a5c4a1948449e6bb6cb39843df4e3f5bba05dde21-merged.mount: Deactivated successfully.
Dec  7 04:44:25 np0005549474 podman[100201]: 2025-12-07 09:44:25.750577342 +0000 UTC m=+0.176873374 container remove 86de278ffbaeaf6c13388f16397436810bb9c4d56481551613c98627e4a681e1 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_liskov, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9)
Dec  7 04:44:25 np0005549474 systemd[1]: libpod-conmon-86de278ffbaeaf6c13388f16397436810bb9c4d56481551613c98627e4a681e1.scope: Deactivated successfully.
Dec  7 04:44:25 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:25 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:25 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: Deploying daemon keepalived.rgw.default.compute-0.xnnorz on compute-0
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec  7 04:44:26 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:26 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:26 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:26 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec  7 04:44:26 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec  7 04:44:26 np0005549474 systemd-logind[796]: New session 37 of user zuul.
Dec  7 04:44:26 np0005549474 systemd[1]: Started Session 37 of User zuul.
Dec  7 04:44:26 np0005549474 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.xnnorz for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:26 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:26 np0005549474 podman[100440]: 2025-12-07 09:44:26.620007708 +0000 UTC m=+0.043618582 container create 720ae7455a9a3899f7447822b12c4a69088dc64f6a588132582771b936658f76 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, release=1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, architecture=x86_64, description=keepalived for Ceph)
Dec  7 04:44:26 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411ef0b2a1d5b0ba475ca6b1496c45e69545051b5d16c08e676cfe1cd9c44d2e/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:26 np0005549474 podman[100440]: 2025-12-07 09:44:26.676014714 +0000 UTC m=+0.099625618 container init 720ae7455a9a3899f7447822b12c4a69088dc64f6a588132582771b936658f76 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, architecture=x86_64, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4)
Dec  7 04:44:26 np0005549474 podman[100440]: 2025-12-07 09:44:26.680838443 +0000 UTC m=+0.104449317 container start 720ae7455a9a3899f7447822b12c4a69088dc64f6a588132582771b936658f76 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz, vcs-type=git, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  7 04:44:26 np0005549474 bash[100440]: 720ae7455a9a3899f7447822b12c4a69088dc64f6a588132582771b936658f76
Dec  7 04:44:26 np0005549474 podman[100440]: 2025-12-07 09:44:26.602072044 +0000 UTC m=+0.025682938 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  7 04:44:26 np0005549474 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.xnnorz for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:26 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:26 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:26 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:26 2025: Configuration file /etc/keepalived/keepalived.conf
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:26 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:26 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:26 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:26 2025: Starting VRRP child process, pid=4
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:26 2025: Startup complete
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:26 2025: (VI_0) Entering BACKUP STATE
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:26 2025: (VI_0) Entering BACKUP STATE (init)
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:26 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev faf3167b-877d-4dc0-9365-613025671afb (Updating ingress.rgw.default deployment (+4 -> 4))
Dec  7 04:44:26 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event faf3167b-877d-4dc0-9365-613025671afb (Updating ingress.rgw.default deployment (+4 -> 4)) in 9 seconds
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 04:44:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:26 2025: VRRP_Script(check_backend) succeeded
Dec  7 04:44:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:26 np0005549474 ceph-mgr[74811]: [progress INFO root] update: starting ev 94fa54b4-68ce-4e17-b89e-81e177d4640a (Updating prometheus deployment (+1 -> 1))
Dec  7 04:44:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:26.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:27 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Dec  7 04:44:27 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Dec  7 04:44:27 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:27 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:27 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:27 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:27.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:27 np0005549474 python3.9[100540]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:44:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:27 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa8c000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:27 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec  7 04:44:27 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec  7 04:44:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:27 2025: (VI_0) Entering MASTER STATE
Dec  7 04:44:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v91: 337 pgs: 4 peering, 333 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 247 B/s, 2 keys/s, 7 objects/s recovering
Dec  7 04:44:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:44:28 np0005549474 ceph-mon[74516]: Deploying daemon prometheus.compute-0 on compute-0
Dec  7 04:44:28 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec  7 04:44:28 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec  7 04:44:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:28 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa8c000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze[98785]: Sun Dec  7 09:44:28 2025: (VI_0) received an invalid passwd!
Dec  7 04:44:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:28 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Dec  7 04:44:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:28 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:28 np0005549474 python3.9[100922]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:44:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:28.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:44:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:29.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:44:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:29 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa940016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:29 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec  7 04:44:29 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec  7 04:44:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v92: 337 pgs: 4 peering, 333 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 185 B/s, 1 keys/s, 5 objects/s recovering
Dec  7 04:44:30 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec  7 04:44:30 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec  7 04:44:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-rgw-default-compute-0-xnnorz[100482]: Sun Dec  7 09:44:30 2025: (VI_0) Entering MASTER STATE
Dec  7 04:44:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:30 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:30 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:30 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 28 completed events
Dec  7 04:44:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:44:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:44:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:30.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:44:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:44:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:31.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:44:31 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:31 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:31 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec  7 04:44:31 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec  7 04:44:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v93: 337 pgs: 337 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 101 B/s, 1 keys/s, 1 objects/s recovering
Dec  7 04:44:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  7 04:44:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 04:44:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  7 04:44:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec  7 04:44:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec  7 04:44:32 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 04:44:32 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 04:44:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 04:44:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 04:44:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec  7 04:44:32 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 75 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=75) [0] r=0 lpr=75 pi=[64,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 75 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=75) [0] r=0 lpr=75 pi=[64,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 75 pg[10.5( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 75 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=75) [0] r=0 lpr=75 pi=[65,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa940016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:32 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:44:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec  7 04:44:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec  7 04:44:32 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec  7 04:44:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:44:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:32.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 76 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=76) [0]/[2] r=-1 lpr=76 pi=[64,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 76 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=76) [0]/[2] r=-1 lpr=76 pi=[64,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 76 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=76) [0]/[2] r=-1 lpr=76 pi=[64,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 76 pg[10.5( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=76) [0]/[2] r=-1 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 76 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=76) [0]/[2] r=-1 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 76 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=76) [0]/[2] r=-1 lpr=76 pi=[64,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 76 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=76) [0]/[2] r=-1 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 76 pg[10.5( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=76) [0]/[2] r=-1 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:33 np0005549474 podman[100659]: 2025-12-07 09:44:33.063525305 +0000 UTC m=+5.475774772 volume create f5d5a9ea9cd9c407e8d61290f24db2a405ea4d4ce238ee06cfe5254cbcbf8412
Dec  7 04:44:33 np0005549474 podman[100659]: 2025-12-07 09:44:33.074349914 +0000 UTC m=+5.486599381 container create 349a4844765455135980fa54d9c3ebdf8ff2cbe1302a54fa66574d28774ce7a2 (image=quay.io/prometheus/prometheus:v2.51.0, name=zealous_thompson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:44:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:33.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:44:33 np0005549474 systemd[1]: Started libpod-conmon-349a4844765455135980fa54d9c3ebdf8ff2cbe1302a54fa66574d28774ce7a2.scope.
Dec  7 04:44:33 np0005549474 podman[100659]: 2025-12-07 09:44:33.046774014 +0000 UTC m=+5.459023521 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  7 04:44:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ebe2d6e994d8b142a6ccd633b26dc95e799ba436f58bd1048a3dc19acaa12/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:33 np0005549474 podman[100659]: 2025-12-07 09:44:33.167902168 +0000 UTC m=+5.580151715 container init 349a4844765455135980fa54d9c3ebdf8ff2cbe1302a54fa66574d28774ce7a2 (image=quay.io/prometheus/prometheus:v2.51.0, name=zealous_thompson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 podman[100659]: 2025-12-07 09:44:33.179712627 +0000 UTC m=+5.591962084 container start 349a4844765455135980fa54d9c3ebdf8ff2cbe1302a54fa66574d28774ce7a2 (image=quay.io/prometheus/prometheus:v2.51.0, name=zealous_thompson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 zealous_thompson[101130]: 65534 65534
Dec  7 04:44:33 np0005549474 systemd[1]: libpod-349a4844765455135980fa54d9c3ebdf8ff2cbe1302a54fa66574d28774ce7a2.scope: Deactivated successfully.
Dec  7 04:44:33 np0005549474 podman[100659]: 2025-12-07 09:44:33.196415956 +0000 UTC m=+5.608665503 container attach 349a4844765455135980fa54d9c3ebdf8ff2cbe1302a54fa66574d28774ce7a2 (image=quay.io/prometheus/prometheus:v2.51.0, name=zealous_thompson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 podman[100659]: 2025-12-07 09:44:33.196947251 +0000 UTC m=+5.609196718 container died 349a4844765455135980fa54d9c3ebdf8ff2cbe1302a54fa66574d28774ce7a2 (image=quay.io/prometheus/prometheus:v2.51.0, name=zealous_thompson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:33 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa8c001b40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-924ebe2d6e994d8b142a6ccd633b26dc95e799ba436f58bd1048a3dc19acaa12-merged.mount: Deactivated successfully.
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec  7 04:44:33 np0005549474 podman[100659]: 2025-12-07 09:44:33.248726816 +0000 UTC m=+5.660976273 container remove 349a4844765455135980fa54d9c3ebdf8ff2cbe1302a54fa66574d28774ce7a2 (image=quay.io/prometheus/prometheus:v2.51.0, name=zealous_thompson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 podman[100659]: 2025-12-07 09:44:33.253635716 +0000 UTC m=+5.665885193 volume remove f5d5a9ea9cd9c407e8d61290f24db2a405ea4d4ce238ee06cfe5254cbcbf8412
Dec  7 04:44:33 np0005549474 systemd[1]: libpod-conmon-349a4844765455135980fa54d9c3ebdf8ff2cbe1302a54fa66574d28774ce7a2.scope: Deactivated successfully.
Dec  7 04:44:33 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 04:44:33 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 04:44:33 np0005549474 podman[101148]: 2025-12-07 09:44:33.322622425 +0000 UTC m=+0.039151074 volume create 727dfc9e50da722f22853ad84c5f82501c8927a7120bab835e54586bb52668c3
Dec  7 04:44:33 np0005549474 podman[101148]: 2025-12-07 09:44:33.329274556 +0000 UTC m=+0.045803205 container create b702d2ce3c522e2063b001dbc3dfbac96dc9f1dab549ba8ee735e1e697658384 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 systemd[1]: Started libpod-conmon-b702d2ce3c522e2063b001dbc3dfbac96dc9f1dab549ba8ee735e1e697658384.scope.
Dec  7 04:44:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f458556451a8a0062073358f083cb5a9cc46490f896c9ca20840787b0149b22/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:33 np0005549474 podman[101148]: 2025-12-07 09:44:33.306922045 +0000 UTC m=+0.023450714 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  7 04:44:33 np0005549474 podman[101148]: 2025-12-07 09:44:33.403566997 +0000 UTC m=+0.120095666 container init b702d2ce3c522e2063b001dbc3dfbac96dc9f1dab549ba8ee735e1e697658384 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 podman[101148]: 2025-12-07 09:44:33.407997445 +0000 UTC m=+0.124526124 container start b702d2ce3c522e2063b001dbc3dfbac96dc9f1dab549ba8ee735e1e697658384 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 mystifying_diffie[101165]: 65534 65534
Dec  7 04:44:33 np0005549474 systemd[1]: libpod-b702d2ce3c522e2063b001dbc3dfbac96dc9f1dab549ba8ee735e1e697658384.scope: Deactivated successfully.
Dec  7 04:44:33 np0005549474 podman[101148]: 2025-12-07 09:44:33.41134134 +0000 UTC m=+0.127869989 container attach b702d2ce3c522e2063b001dbc3dfbac96dc9f1dab549ba8ee735e1e697658384 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 podman[101148]: 2025-12-07 09:44:33.411519925 +0000 UTC m=+0.128048574 container died b702d2ce3c522e2063b001dbc3dfbac96dc9f1dab549ba8ee735e1e697658384 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0f458556451a8a0062073358f083cb5a9cc46490f896c9ca20840787b0149b22-merged.mount: Deactivated successfully.
Dec  7 04:44:33 np0005549474 podman[101148]: 2025-12-07 09:44:33.453092138 +0000 UTC m=+0.169620787 container remove b702d2ce3c522e2063b001dbc3dfbac96dc9f1dab549ba8ee735e1e697658384 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:33 np0005549474 podman[101148]: 2025-12-07 09:44:33.457940286 +0000 UTC m=+0.174468935 volume remove 727dfc9e50da722f22853ad84c5f82501c8927a7120bab835e54586bb52668c3
Dec  7 04:44:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v96: 337 pgs: 337 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:44:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  7 04:44:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 04:44:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  7 04:44:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 04:44:33 np0005549474 systemd[1]: libpod-conmon-b702d2ce3c522e2063b001dbc3dfbac96dc9f1dab549ba8ee735e1e697658384.scope: Deactivated successfully.
Dec  7 04:44:33 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:33 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:33 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:33 np0005549474 systemd[1]: Reloading.
Dec  7 04:44:33 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:44:33 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:44:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec  7 04:44:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 04:44:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 04:44:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec  7 04:44:33 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 77 pg[6.e( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=63/63 les/c/f=64/64/0 sis=77) [0] r=0 lpr=77 pi=[63,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 77 pg[6.6( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=63/63 les/c/f=64/64/0 sis=77) [0] r=0 lpr=77 pi=[63,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 77 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=11.988593102s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=56'1095 mlcod 0'0 active pruub 242.830673218s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 77 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=11.988561630s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 242.830673218s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 77 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=11.988087654s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=56'1095 mlcod 0'0 active pruub 242.830734253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 77 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=11.988059998s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 242.830734253s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 77 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=11.987990379s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=56'1095 mlcod 0'0 active pruub 242.830810547s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 77 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=11.987961769s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 242.830810547s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 77 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=11.988025665s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=56'1095 mlcod 0'0 active pruub 242.830886841s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:33 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 77 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=11.987987518s) [1] r=-1 lpr=77 pi=[66,77)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 242.830886841s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:34 np0005549474 systemd[1]: Starting Ceph prometheus.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:44:34 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 04:44:34 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 04:44:34 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 04:44:34 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:34 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:34 np0005549474 podman[101312]: 2025-12-07 09:44:34.458849794 +0000 UTC m=+0.058247542 container create 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a10dd6039b0d4fc11525da0d5c408dfe13f7cc7e82bfb4bd5d2fc64e6e9173/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a10dd6039b0d4fc11525da0d5c408dfe13f7cc7e82bfb4bd5d2fc64e6e9173/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:34 np0005549474 podman[101312]: 2025-12-07 09:44:34.51660505 +0000 UTC m=+0.116002838 container init 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:34 np0005549474 podman[101312]: 2025-12-07 09:44:34.42524635 +0000 UTC m=+0.024644058 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  7 04:44:34 np0005549474 podman[101312]: 2025-12-07 09:44:34.526993938 +0000 UTC m=+0.126391686 container start 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:34 np0005549474 bash[101312]: 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc
Dec  7 04:44:34 np0005549474 systemd[1]: Started Ceph prometheus.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.564Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.564Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.564Z caller=main.go:623 level=info host_details="(Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 x86_64 compute-0 (none))"
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.564Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.564Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.566Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.567Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.571Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.571Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.574Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.574Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.42µs
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.574Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.575Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.575Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=37.681µs wal_replay_duration=527.565µs wbl_replay_duration=240ns total_replay_duration=650.658µs
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.578Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.578Z caller=main.go:1153 level=info msg="TSDB started"
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.578Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Dec  7 04:44:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.625Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=46.664309ms db_storage=1.781µs remote_storage=1.99µs web_handler=850ns query_engine=1.2µs scrape=4.20228ms scrape_sd=333.709µs notify=28.861µs notify_sd=21.441µs rules=41.120329ms tracing=15.33µs
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.625Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0[101327]: ts=2025-12-07T09:44:34.625Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Dec  7 04:44:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:44:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:34 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa940016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec  7 04:44:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:44:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:34.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.15( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=76/64 les/c/f=77/65/0 sis=78) [0] r=0 lpr=78 pi=[64,78)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=76/65 les/c/f=77/66/0 sis=78) [0] r=0 lpr=78 pi=[65,78)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=76/65 les/c/f=77/66/0 sis=78) [0] r=0 lpr=78 pi=[65,78)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.5( v 77'1108 (0'0,77'1108] local-lis/les=0/0 n=6 ec=57/42 lis/c=76/65 les/c/f=77/66/0 sis=78) [0] r=0 lpr=78 pi=[65,78)/1 luod=0'0 crt=66'1104 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.5( v 77'1108 (0'0,77'1108] local-lis/les=0/0 n=6 ec=57/42 lis/c=76/65 les/c/f=77/66/0 sis=78) [0] r=0 lpr=78 pi=[65,78)/1 crt=66'1104 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=76/64 les/c/f=77/65/0 sis=78) [0] r=0 lpr=78 pi=[64,78)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=76/64 les/c/f=77/65/0 sis=78) [0] r=0 lpr=78 pi=[64,78)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[10.15( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=76/64 les/c/f=77/65/0 sis=78) [0] r=0 lpr=78 pi=[64,78)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[6.6( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=77/78 n=2 ec=53/21 lis/c=63/63 les/c/f=64/64/0 sis=77) [0] r=0 lpr=77 pi=[63,77)/1 crt=46'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:35 np0005549474 ceph-mgr[74811]: [progress INFO root] complete: finished ev 94fa54b4-68ce-4e17-b89e-81e177d4640a (Updating prometheus deployment (+1 -> 1))
Dec  7 04:44:35 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 94fa54b4-68ce-4e17-b89e-81e177d4640a (Updating prometheus deployment (+1 -> 1)) in 8 seconds
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 78 pg[6.e( v 46'39 lc 45'19 (0'0,46'39] local-lis/les=77/78 n=1 ec=53/21 lis/c=63/63 les/c/f=64/64/0 sis=77) [0] r=0 lpr=77 pi=[63,77)/1 crt=46'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec  7 04:44:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:35.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec  7 04:44:35 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec  7 04:44:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:35 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v99: 337 pgs: 337 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  7 04:44:35 np0005549474 ceph-mgr[74811]: [progress INFO root] Writing back 29 completed events
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 04:44:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:35 np0005549474 ceph-mgr[74811]: [progress INFO root] Completed event 84ec1a42-d96d-4f90-bdf3-20a6be840704 (Global Recovery Event) in 20 seconds
Dec  7 04:44:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec  7 04:44:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  7 04:44:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  7 04:44:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec  7 04:44:36 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec  7 04:44:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 79 pg[10.15( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=76/64 les/c/f=77/65/0 sis=78) [0] r=0 lpr=78 pi=[64,78)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 79 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=76/65 les/c/f=77/66/0 sis=78) [0] r=0 lpr=78 pi=[65,78)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 79 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=6 ec=57/42 lis/c=76/64 les/c/f=77/65/0 sis=78) [0] r=0 lpr=78 pi=[64,78)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 79 pg[10.5( v 77'1108 (0'0,77'1108] local-lis/les=78/79 n=6 ec=57/42 lis/c=76/65 les/c/f=77/66/0 sis=78) [0] r=0 lpr=78 pi=[65,78)/1 crt=77'1108 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec  7 04:44:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 79 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 79 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 79 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 79 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[66,78)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:36 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  7 04:44:36 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.dotugk(active, since 106s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:44:36 np0005549474 systemd[1]: session-35.scope: Deactivated successfully.
Dec  7 04:44:36 np0005549474 systemd[1]: session-35.scope: Consumed 49.714s CPU time.
Dec  7 04:44:36 np0005549474 systemd-logind[796]: Session 35 logged out. Waiting for processes to exit.
Dec  7 04:44:36 np0005549474 systemd-logind[796]: Removed session 35.
Dec  7 04:44:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ignoring --setuser ceph since I am not root
Dec  7 04:44:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ignoring --setgroup ceph since I am not root
Dec  7 04:44:36 np0005549474 ceph-mgr[74811]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 04:44:36 np0005549474 ceph-mgr[74811]: pidfile_write: ignore empty --pid-file
Dec  7 04:44:36 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'alerts'
Dec  7 04:44:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:36.327+0000 7f2cfe369140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 04:44:36 np0005549474 ceph-mgr[74811]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 04:44:36 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'balancer'
Dec  7 04:44:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:36.411+0000 7f2cfe369140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 04:44:36 np0005549474 ceph-mgr[74811]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 04:44:36 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'cephadm'
Dec  7 04:44:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:36 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa8c001b40 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:36 np0005549474 systemd-logind[796]: Session 37 logged out. Waiting for processes to exit.
Dec  7 04:44:36 np0005549474 systemd[1]: session-37.scope: Deactivated successfully.
Dec  7 04:44:36 np0005549474 systemd[1]: session-37.scope: Consumed 8.050s CPU time.
Dec  7 04:44:36 np0005549474 systemd-logind[796]: Removed session 37.
Dec  7 04:44:36 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:36 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  7 04:44:36 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  7 04:44:36 np0005549474 ceph-mon[74516]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec  7 04:44:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:36 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:36.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec  7 04:44:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec  7 04:44:37 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec  7 04:44:37 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 80 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.011715889s) [1] async=[1] r=-1 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 56'1095 active pruub 248.910964966s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:37 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 80 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.011639595s) [1] r=-1 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 248.910964966s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:37 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 80 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=6 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.011548042s) [1] async=[1] r=-1 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 56'1095 active pruub 248.911346436s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:37 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 80 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=6 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.011455536s) [1] r=-1 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 248.911346436s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:37 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 80 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=6 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.010687828s) [1] async=[1] r=-1 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 56'1095 active pruub 248.911285400s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:37 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 80 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=6 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.010647774s) [1] r=-1 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 248.911285400s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:37 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 80 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.010423660s) [1] async=[1] r=-1 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 56'1095 active pruub 248.911178589s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:37 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 80 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80 pruub=15.010366440s) [1] r=-1 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 248.911178589s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:37.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:37 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'crash'
Dec  7 04:44:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:37.195+0000 7f2cfe369140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:44:37 np0005549474 ceph-mgr[74811]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 04:44:37 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'dashboard'
Dec  7 04:44:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:37 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94002b10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:37 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'devicehealth'
Dec  7 04:44:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:37.804+0000 7f2cfe369140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:44:37 np0005549474 ceph-mgr[74811]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 04:44:37 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 04:44:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:44:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 04:44:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 04:44:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]:  from numpy import show_config as show_numpy_config
Dec  7 04:44:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:37.972+0000 7f2cfe369140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:44:37 np0005549474 ceph-mgr[74811]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 04:44:37 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'influx'
Dec  7 04:44:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec  7 04:44:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:38.042+0000 7f2cfe369140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:44:38 np0005549474 ceph-mgr[74811]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 04:44:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'insights'
Dec  7 04:44:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec  7 04:44:38 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec  7 04:44:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'iostat'
Dec  7 04:44:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:38.186+0000 7f2cfe369140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:44:38 np0005549474 ceph-mgr[74811]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 04:44:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'k8sevents'
Dec  7 04:44:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:38 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'localpool'
Dec  7 04:44:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 04:44:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:38 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa8c002b10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'mirroring'
Dec  7 04:44:38 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'nfs'
Dec  7 04:44:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:38.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec  7 04:44:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec  7 04:44:39 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec  7 04:44:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:39.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:39.190+0000 7f2cfe369140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'orchestrator'
Dec  7 04:44:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:39 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:39.421+0000 7f2cfe369140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 04:44:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:39.499+0000 7f2cfe369140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'osd_support'
Dec  7 04:44:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:39.562+0000 7f2cfe369140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 04:44:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:39.647+0000 7f2cfe369140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'progress'
Dec  7 04:44:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:39.722+0000 7f2cfe369140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 04:44:39 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'prometheus'
Dec  7 04:44:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec  7 04:44:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec  7 04:44:40 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec  7 04:44:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:40.080+0000 7f2cfe369140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:44:40 np0005549474 ceph-mgr[74811]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 04:44:40 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rbd_support'
Dec  7 04:44:40 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec  7 04:44:40 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec  7 04:44:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:40.175+0000 7f2cfe369140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:44:40 np0005549474 ceph-mgr[74811]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 04:44:40 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'restful'
Dec  7 04:44:40 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rgw'
Dec  7 04:44:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:40 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94002b10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:40.593+0000 7f2cfe369140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:44:40 np0005549474 ceph-mgr[74811]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 04:44:40 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'rook'
Dec  7 04:44:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:40 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:44:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:40.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:44:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:41.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:41 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec  7 04:44:41 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec  7 04:44:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:41.167+0000 7f2cfe369140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'selftest'
Dec  7 04:44:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:41 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa8c002b10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:41.236+0000 7f2cfe369140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'snap_schedule'
Dec  7 04:44:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:41.312+0000 7f2cfe369140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'stats'
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'status'
Dec  7 04:44:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:41.460+0000 7f2cfe369140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telegraf'
Dec  7 04:44:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:41.531+0000 7f2cfe369140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'telemetry'
Dec  7 04:44:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:41.687+0000 7f2cfe369140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 04:44:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:41.913+0000 7f2cfe369140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 04:44:41 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'volumes'
Dec  7 04:44:42 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.0 deep-scrub starts
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:42.174+0000 7f2cfe369140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr[py] Loading python module 'zabbix'
Dec  7 04:44:42 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.0 deep-scrub ok
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:42.251+0000 7f2cfe369140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ntknug restarted
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ntknug started
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dotugk restarted
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dotugk
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: ms_deliver_dispatch: unhandled message 0x55a12e675860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.dotugk(active, starting, since 0.0326677s), standbys: compute-1.buauyv, compute-2.ntknug
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map Activating!
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr handle_mgr_map I am now activating
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.qgzqbk"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.qgzqbk"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e8 all = 0
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.rxtsyx"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.rxtsyx"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e8 all = 0
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.ihigcc"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ihigcc"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e8 all = 0
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).mds e8 all = 1
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : Manager daemon compute-0.dotugk is now available
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: balancer
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Starting
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:44:42
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: cephadm
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: crash
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: dashboard
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: devicehealth
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO sso] Loading SSO DB version=1
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: iostat
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: nfs
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: orchestrator
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: pg_autoscaler
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Starting
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: progress
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: prometheus
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [prometheus INFO root] Cache enabled
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [prometheus INFO root] starting metric collection thread
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [prometheus INFO root] Starting engine...
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: [07/Dec/2025:09:44:42] ENGINE Bus STARTING
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:44:42] ENGINE Bus STARTING
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: CherryPy Checker:
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: The Application mounted at '' has an empty config.
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [progress INFO root] Loading...
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f2c7be28100>, <progress.module.GhostEvent object at 0x7f2c7be24df0>, <progress.module.GhostEvent object at 0x7f2c7be24e80>, <progress.module.GhostEvent object at 0x7f2c7be24eb0>, <progress.module.GhostEvent object at 0x7f2c7be24c40>, <progress.module.GhostEvent object at 0x7f2c7be2d520>, <progress.module.GhostEvent object at 0x7f2c7be2cfd0>, <progress.module.GhostEvent object at 0x7f2c7be2cfa0>, <progress.module.GhostEvent object at 0x7f2c7be2cf40>, <progress.module.GhostEvent object at 0x7f2c7be2cf10>, <progress.module.GhostEvent object at 0x7f2c7be2cee0>, <progress.module.GhostEvent object at 0x7f2c7be2ceb0>, <progress.module.GhostEvent object at 0x7f2c7be2ce80>, <progress.module.GhostEvent object at 0x7f2c7be2ce50>, <progress.module.GhostEvent object at 0x7f2c7be2ce20>, <progress.module.GhostEvent object at 0x7f2c7be2cdf0>, <progress.module.GhostEvent object at 0x7f2c7be2cdc0>, <progress.module.GhostEvent object at 0x7f2c7be2cd90>, <progress.module.GhostEvent object at 0x7f2c7be2cd60>, <progress.module.GhostEvent object at 0x7f2c7be2cd30>, <progress.module.GhostEvent object at 0x7f2c7be2cd00>, <progress.module.GhostEvent object at 0x7f2c7be2ccd0>, <progress.module.GhostEvent object at 0x7f2c7be2cca0>, <progress.module.GhostEvent object at 0x7f2c7be2cc70>, <progress.module.GhostEvent object at 0x7f2c7be2cc40>, <progress.module.GhostEvent object at 0x7f2c7be2cc10>, <progress.module.GhostEvent object at 0x7f2c7be2cbe0>, <progress.module.GhostEvent object at 0x7f2c7be2cbb0>, <progress.module.GhostEvent object at 0x7f2c7be2cb80>] historic events
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [progress INFO root] Loaded OSDMap, ready.
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] recovery thread starting
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] starting setup
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: rbd_support
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: restful
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [restful INFO root] server_addr: :: server_port: 8003
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [restful WARNING root] server not running: no certificate configured
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: status
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: telemetry
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:42 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] PerfHandler: starting
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: mgr load Constructed class from module: volumes
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:42.484+0000 7f2c6bab8640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:42.487+0000 7f2c692b3640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:42.487+0000 7f2c692b3640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:42.487+0000 7f2c692b3640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:42.487+0000 7f2c692b3640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T09:44:42.487+0000 7f2c692b3640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: client.0 error registering admin socket command: (17) File exists
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TaskHandler: starting
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"} v 0)
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] setup complete
Dec  7 04:44:42 np0005549474 systemd-logind[796]: New session 38 of user ceph-admin.
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: [07/Dec/2025:09:44:42] ENGINE Serving on http://:::9283
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:44:42] ENGINE Serving on http://:::9283
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: [07/Dec/2025:09:44:42] ENGINE Bus STARTED
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:44:42] ENGINE Bus STARTED
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [prometheus INFO root] Engine started.
Dec  7 04:44:42 np0005549474 systemd[1]: Started Session 38 of User ceph-admin.
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.buauyv restarted
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.buauyv started
Dec  7 04:44:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:42 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94002b10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:42 np0005549474 ceph-mgr[74811]: [dashboard INFO dashboard.module] Engine started.
Dec  7 04:44:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:44:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:42.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:43.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:43 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec  7 04:44:43 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec  7 04:44:43 np0005549474 ceph-mon[74516]: Active manager daemon compute-0.dotugk restarted
Dec  7 04:44:43 np0005549474 ceph-mon[74516]: Activating manager daemon compute-0.dotugk
Dec  7 04:44:43 np0005549474 ceph-mon[74516]: Manager daemon compute-0.dotugk is now available
Dec  7 04:44:43 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:43 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec  7 04:44:43 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec  7 04:44:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:43 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:43 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.dotugk(active, since 1.05586s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:44:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v3: 337 pgs: 337 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:44:43 np0005549474 podman[101706]: 2025-12-07 09:44:43.387924207 +0000 UTC m=+0.069434553 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:44:43 np0005549474 podman[101706]: 2025-12-07 09:44:43.485631099 +0000 UTC m=+0.167141425 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 04:44:43 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:44:43] ENGINE Bus STARTING
Dec  7 04:44:43 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:44:43] ENGINE Bus STARTING
Dec  7 04:44:43 np0005549474 podman[101825]: 2025-12-07 09:44:43.90231205 +0000 UTC m=+0.045297140 container exec 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:43 np0005549474 podman[101825]: 2025-12-07 09:44:43.912506772 +0000 UTC m=+0.055491872 container exec_died 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:43 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:44:43] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:44:43 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:44:43] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:44:44 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:44:44] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:44:44 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:44:44] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:44:44 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:44:44] ENGINE Bus STARTED
Dec  7 04:44:44 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:44:44] ENGINE Bus STARTED
Dec  7 04:44:44 np0005549474 ceph-mgr[74811]: [cephadm INFO cherrypy.error] [07/Dec/2025:09:44:44] ENGINE Client ('192.168.122.100', 35992) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:44:44 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : [07/Dec/2025:09:44:44] ENGINE Client ('192.168.122.100', 35992) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:44:44 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec  7 04:44:44 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec  7 04:44:44 np0005549474 podman[101939]: 2025-12-07 09:44:44.185704818 +0000 UTC m=+0.045230798 container exec 3032e77de59ae8dbc659eb5f3388808ef7e05cac9360a3023473657f3f714972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  7 04:44:44 np0005549474 podman[101939]: 2025-12-07 09:44:44.197464275 +0000 UTC m=+0.056990255 container exec_died 3032e77de59ae8dbc659eb5f3388808ef7e05cac9360a3023473657f3f714972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 04:44:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v4: 337 pgs: 337 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:44:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Dec  7 04:44:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  7 04:44:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec  7 04:44:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  7 04:44:44 np0005549474 podman[102002]: 2025-12-07 09:44:44.404913765 +0000 UTC m=+0.055760350 container exec e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:44:44 np0005549474 podman[102002]: 2025-12-07 09:44:44.410839715 +0000 UTC m=+0.061686300 container exec_died e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:44:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:44 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa8c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:44 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Check health
Dec  7 04:44:44 np0005549474 podman[102077]: 2025-12-07 09:44:44.634606373 +0000 UTC m=+0.054478543 container exec 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, vendor=Red Hat, Inc., name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, distribution-scope=public, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.openshift.expose-services=)
Dec  7 04:44:44 np0005549474 podman[102077]: 2025-12-07 09:44:44.641948144 +0000 UTC m=+0.061820304 container exec_died 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.buildah.version=1.28.2, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.component=keepalived-container, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., version=2.2.4, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  7 04:44:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:44 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:44.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:44:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:45.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:44:45 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec  7 04:44:45 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec  7 04:44:45 np0005549474 podman[102142]: 2025-12-07 09:44:45.187448922 +0000 UTC m=+0.396666651 container exec 04c8c2e8a280cff3b4a05dd7c603babcc202832ee5c7ed7519a19bb4d3debe52 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec  7 04:44:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 85 pg[6.8( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=85 pruub=13.986022949s) [1] r=-1 lpr=85 pi=[53,85)/1 crt=46'39 lcod 0'0 mlcod 0'0 active pruub 256.050811768s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 85 pg[6.8( v 46'39 (0'0,46'39] local-lis/les=53/54 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=85 pruub=13.985979080s) [1] r=-1 lpr=85 pi=[53,85)/1 crt=46'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 256.050811768s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:45 np0005549474 podman[102142]: 2025-12-07 09:44:45.224665372 +0000 UTC m=+0.433883081 container exec_died 04c8c2e8a280cff3b4a05dd7c603babcc202832ee5c7ed7519a19bb4d3debe52 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 85 pg[10.18( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=85) [0] r=0 lpr=85 pi=[57,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:45 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 85 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=85) [0] r=0 lpr=85 pi=[57,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:45 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:44:43] ENGINE Bus STARTING
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:44:43] ENGINE Serving on http://192.168.122.100:8765
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:44:44] ENGINE Serving on https://192.168.122.100:7150
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:44:44] ENGINE Bus STARTED
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: [07/Dec/2025:09:44:44] ENGINE Client ('192.168.122.100', 35992) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.dotugk(active, since 3s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:44:45 np0005549474 podman[102216]: 2025-12-07 09:44:45.418491935 +0000 UTC m=+0.045247085 container exec 43800770719fe020c26f12af78188fe644bc6c260647b0a99f09085a97e3e860 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:45 np0005549474 podman[102216]: 2025-12-07 09:44:45.603606043 +0000 UTC m=+0.230361193 container exec_died 43800770719fe020c26f12af78188fe644bc6c260647b0a99f09085a97e3e860 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:44:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:44:46 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec  7 04:44:46 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:44:46 np0005549474 podman[102326]: 2025-12-07 09:44:46.163981563 +0000 UTC m=+0.217835237 container exec 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:44:46 np0005549474 podman[102326]: 2025-12-07 09:44:46.200631266 +0000 UTC m=+0.254484960 container exec_died 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:44:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v6: 337 pgs: 337 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:44:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:46 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:46 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 86 pg[10.18( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=86) [0]/[1] r=-1 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:46 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 86 pg[10.18( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=86) [0]/[1] r=-1 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec  7 04:44:46 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 86 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=86) [0]/[1] r=-1 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:46 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 86 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=86) [0]/[1] r=-1 lpr=86 pi=[57,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:46 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa8c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:46.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:47 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec  7 04:44:47 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec  7 04:44:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:47.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:47 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e33: compute-0.dotugk(active, since 5s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec  7 04:44:47 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 87 pg[6.9( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=61/61 les/c/f=62/62/0 sis=87) [0] r=0 lpr=87 pi=[61,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:44:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec  7 04:44:48 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec  7 04:44:48 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec  7 04:44:48 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 88 pg[10.18( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=86/57 les/c/f=87/58/0 sis=88) [0] r=0 lpr=88 pi=[57,88)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:48 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 88 pg[10.18( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=86/57 les/c/f=87/58/0 sis=88) [0] r=0 lpr=88 pi=[57,88)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:48 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 88 pg[10.8( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=86/57 les/c/f=87/58/0 sis=88) [0] r=0 lpr=88 pi=[57,88)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:48 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 88 pg[10.8( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=86/57 les/c/f=87/58/0 sis=88) [0] r=0 lpr=88 pi=[57,88)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:48 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 88 pg[6.9( v 46'39 (0'0,46'39] local-lis/les=87/88 n=1 ec=53/21 lis/c=61/61 les/c/f=62/62/0 sis=87) [0] r=0 lpr=87 pi=[61,87)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v10: 337 pgs: 337 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  7 04:44:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:48 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  7 04:44:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  7 04:44:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:48 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:44:48 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:44:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:48.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:49 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec  7 04:44:49 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec  7 04:44:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:44:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:49.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec  7 04:44:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:49 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa8c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec  7 04:44:49 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 89 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=89 pruub=12.783805847s) [1] r=-1 lpr=89 pi=[66,89)/1 crt=56'1095 mlcod 0'0 active pruub 258.985321045s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:49 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 89 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=89 pruub=12.783729553s) [1] r=-1 lpr=89 pi=[66,89)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 258.985321045s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:49 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 89 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=89 pruub=12.783002853s) [1] r=-1 lpr=89 pi=[66,89)/1 crt=56'1095 mlcod 0'0 active pruub 258.985321045s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:49 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 89 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=89 pruub=12.782910347s) [1] r=-1 lpr=89 pi=[66,89)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 258.985321045s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:49 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 89 pg[10.8( v 56'1095 (0'0,56'1095] local-lis/les=88/89 n=6 ec=57/42 lis/c=86/57 les/c/f=87/58/0 sis=88) [0] r=0 lpr=88 pi=[57,88)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:49 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 89 pg[10.18( v 56'1095 (0'0,56'1095] local-lis/les=88/89 n=5 ec=57/42 lis/c=86/57 les/c/f=87/58/0 sis=88) [0] r=0 lpr=88 pi=[57,88)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:49 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:44:49 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:44:49 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:44:49 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:44:49 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:44:49 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.conf
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.conf
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.conf
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  7 04:44:49 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  7 04:44:49 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:44:49 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:44:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:44:49] "GET /metrics HTTP/1.1" 200 46581 "" "Prometheus/2.51.0"
Dec  7 04:44:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:44:49] "GET /metrics HTTP/1.1" 200 46581 "" "Prometheus/2.51.0"
Dec  7 04:44:50 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:44:50 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:44:50 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec  7 04:44:50 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec  7 04:44:50 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:44:50 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:44:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v12: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 13 op/s; 54 B/s, 2 objects/s recovering
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec  7 04:44:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:50 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:44:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:50 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec  7 04:44:50 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 90 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=0 lpr=90 pi=[66,90)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:50 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 90 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=0 lpr=90 pi=[66,90)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:50 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 90 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=0 lpr=90 pi=[66,90)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:50 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 90 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=0 lpr=90 pi=[66,90)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:44:50 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec  7 04:44:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:51 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec  7 04:44:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:51.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:44:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:51 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec  7 04:44:51 np0005549474 podman[103505]: 2025-12-07 09:44:51.757374946 +0000 UTC m=+0.041771793 container create 241e80d22ad3d8fc3e689350fb445520063deed5bb4fb247e5b0538bca437dbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec  7 04:44:51 np0005549474 systemd[92462]: Starting Mark boot as successful...
Dec  7 04:44:51 np0005549474 systemd[92462]: Finished Mark boot as successful.
Dec  7 04:44:51 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 91 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=5 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[66,90)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:51 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 91 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=6 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[66,90)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:44:51 np0005549474 systemd[1]: Started libpod-conmon-241e80d22ad3d8fc3e689350fb445520063deed5bb4fb247e5b0538bca437dbb.scope.
Dec  7 04:44:51 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:51 np0005549474 podman[103505]: 2025-12-07 09:44:51.819950457 +0000 UTC m=+0.104347324 container init 241e80d22ad3d8fc3e689350fb445520063deed5bb4fb247e5b0538bca437dbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 04:44:51 np0005549474 podman[103505]: 2025-12-07 09:44:51.829777224 +0000 UTC m=+0.114174071 container start 241e80d22ad3d8fc3e689350fb445520063deed5bb4fb247e5b0538bca437dbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldstine, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:44:51 np0005549474 podman[103505]: 2025-12-07 09:44:51.735514096 +0000 UTC m=+0.019910973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:44:51 np0005549474 podman[103505]: 2025-12-07 09:44:51.833602207 +0000 UTC m=+0.117999054 container attach 241e80d22ad3d8fc3e689350fb445520063deed5bb4fb247e5b0538bca437dbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:44:51 np0005549474 sad_goldstine[103522]: 167 167
Dec  7 04:44:51 np0005549474 systemd[1]: libpod-241e80d22ad3d8fc3e689350fb445520063deed5bb4fb247e5b0538bca437dbb.scope: Deactivated successfully.
Dec  7 04:44:51 np0005549474 podman[103505]: 2025-12-07 09:44:51.835041049 +0000 UTC m=+0.119437936 container died 241e80d22ad3d8fc3e689350fb445520063deed5bb4fb247e5b0538bca437dbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldstine, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:44:51 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b115cd946e017551b47015d8187e1fd8826895fdf5e4393c178648c6856f9e7d-merged.mount: Deactivated successfully.
Dec  7 04:44:51 np0005549474 podman[103505]: 2025-12-07 09:44:51.873496174 +0000 UTC m=+0.157893021 container remove 241e80d22ad3d8fc3e689350fb445520063deed5bb4fb247e5b0538bca437dbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_goldstine, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:44:51 np0005549474 systemd[1]: libpod-conmon-241e80d22ad3d8fc3e689350fb445520063deed5bb4fb247e5b0538bca437dbb.scope: Deactivated successfully.
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:51 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:44:52 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec  7 04:44:52 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec  7 04:44:52 np0005549474 podman[103544]: 2025-12-07 09:44:52.048583999 +0000 UTC m=+0.046666347 container create ae17161029ec90bde87e5bfa2c38595b5755f338efda5ed7853fac40e933182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  7 04:44:52 np0005549474 systemd[1]: Started libpod-conmon-ae17161029ec90bde87e5bfa2c38595b5755f338efda5ed7853fac40e933182c.scope.
Dec  7 04:44:52 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6f013826cbeeca2f3b2e2361a4062458f9ba7ba4421f401adb9e557c5aa749/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6f013826cbeeca2f3b2e2361a4062458f9ba7ba4421f401adb9e557c5aa749/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6f013826cbeeca2f3b2e2361a4062458f9ba7ba4421f401adb9e557c5aa749/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6f013826cbeeca2f3b2e2361a4062458f9ba7ba4421f401adb9e557c5aa749/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6f013826cbeeca2f3b2e2361a4062458f9ba7ba4421f401adb9e557c5aa749/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:52 np0005549474 podman[103544]: 2025-12-07 09:44:52.119076632 +0000 UTC m=+0.117159000 container init ae17161029ec90bde87e5bfa2c38595b5755f338efda5ed7853fac40e933182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:44:52 np0005549474 podman[103544]: 2025-12-07 09:44:52.02708779 +0000 UTC m=+0.025170158 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:44:52 np0005549474 podman[103544]: 2025-12-07 09:44:52.129981921 +0000 UTC m=+0.128064269 container start ae17161029ec90bde87e5bfa2c38595b5755f338efda5ed7853fac40e933182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 04:44:52 np0005549474 podman[103544]: 2025-12-07 09:44:52.132734382 +0000 UTC m=+0.130816730 container attach ae17161029ec90bde87e5bfa2c38595b5755f338efda5ed7853fac40e933182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 04:44:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v15: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 13 op/s; 54 B/s, 2 objects/s recovering
Dec  7 04:44:52 np0005549474 systemd-logind[796]: New session 39 of user zuul.
Dec  7 04:44:52 np0005549474 systemd[1]: Started Session 39 of User zuul.
Dec  7 04:44:52 np0005549474 suspicious_ellis[103561]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:44:52 np0005549474 suspicious_ellis[103561]: --> All data devices are unavailable
Dec  7 04:44:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:52 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa8c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:52 np0005549474 systemd[1]: libpod-ae17161029ec90bde87e5bfa2c38595b5755f338efda5ed7853fac40e933182c.scope: Deactivated successfully.
Dec  7 04:44:52 np0005549474 podman[103544]: 2025-12-07 09:44:52.456082955 +0000 UTC m=+0.454165293 container died ae17161029ec90bde87e5bfa2c38595b5755f338efda5ed7853fac40e933182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 04:44:52 np0005549474 systemd[1]: var-lib-containers-storage-overlay-fe6f013826cbeeca2f3b2e2361a4062458f9ba7ba4421f401adb9e557c5aa749-merged.mount: Deactivated successfully.
Dec  7 04:44:52 np0005549474 podman[103544]: 2025-12-07 09:44:52.510941151 +0000 UTC m=+0.509023499 container remove ae17161029ec90bde87e5bfa2c38595b5755f338efda5ed7853fac40e933182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:44:52 np0005549474 systemd[1]: libpod-conmon-ae17161029ec90bde87e5bfa2c38595b5755f338efda5ed7853fac40e933182c.scope: Deactivated successfully.
Dec  7 04:44:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:52 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec  7 04:44:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec  7 04:44:52 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec  7 04:44:52 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 92 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=5 ec=57/42 lis/c=90/66 les/c/f=91/67/0 sis=92 pruub=15.000304222s) [1] async=[1] r=-1 lpr=92 pi=[66,92)/1 crt=56'1095 mlcod 56'1095 active pruub 264.631286621s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:52 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 92 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=5 ec=57/42 lis/c=90/66 les/c/f=91/67/0 sis=92 pruub=15.000190735s) [1] r=-1 lpr=92 pi=[66,92)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 264.631286621s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:52 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 92 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=6 ec=57/42 lis/c=90/66 les/c/f=91/67/0 sis=92 pruub=15.000087738s) [1] async=[1] r=-1 lpr=92 pi=[66,92)/1 crt=56'1095 mlcod 56'1095 active pruub 264.631286621s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:44:52 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 92 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=6 ec=57/42 lis/c=90/66 les/c/f=91/67/0 sis=92 pruub=15.000021935s) [1] r=-1 lpr=92 pi=[66,92)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 264.631286621s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:44:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:44:52 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec  7 04:44:53 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec  7 04:44:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:53.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:53 np0005549474 python3.9[103792]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  7 04:44:53 np0005549474 podman[103833]: 2025-12-07 09:44:53.053083237 +0000 UTC m=+0.038693373 container create 3c0f1a59d85ec32720eac5a91e6616f9f31a209dbed57383e31bc6bb46584eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rhodes, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 04:44:53 np0005549474 systemd[1]: Started libpod-conmon-3c0f1a59d85ec32720eac5a91e6616f9f31a209dbed57383e31bc6bb46584eb5.scope.
Dec  7 04:44:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:44:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:53.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:44:53 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:53 np0005549474 podman[103833]: 2025-12-07 09:44:53.034534825 +0000 UTC m=+0.020144991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:44:53 np0005549474 podman[103833]: 2025-12-07 09:44:53.142660249 +0000 UTC m=+0.128270415 container init 3c0f1a59d85ec32720eac5a91e6616f9f31a209dbed57383e31bc6bb46584eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:44:53 np0005549474 podman[103833]: 2025-12-07 09:44:53.150348684 +0000 UTC m=+0.135958840 container start 3c0f1a59d85ec32720eac5a91e6616f9f31a209dbed57383e31bc6bb46584eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rhodes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:44:53 np0005549474 podman[103833]: 2025-12-07 09:44:53.153466095 +0000 UTC m=+0.139076261 container attach 3c0f1a59d85ec32720eac5a91e6616f9f31a209dbed57383e31bc6bb46584eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rhodes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:44:53 np0005549474 awesome_rhodes[103868]: 167 167
Dec  7 04:44:53 np0005549474 systemd[1]: libpod-3c0f1a59d85ec32720eac5a91e6616f9f31a209dbed57383e31bc6bb46584eb5.scope: Deactivated successfully.
Dec  7 04:44:53 np0005549474 podman[103833]: 2025-12-07 09:44:53.155514566 +0000 UTC m=+0.141124712 container died 3c0f1a59d85ec32720eac5a91e6616f9f31a209dbed57383e31bc6bb46584eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 04:44:53 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f9fc6f9c98bc5811e9cd211979560bad88cebe66b5415c7f54181944488f4f39-merged.mount: Deactivated successfully.
Dec  7 04:44:53 np0005549474 podman[103833]: 2025-12-07 09:44:53.192293242 +0000 UTC m=+0.177903388 container remove 3c0f1a59d85ec32720eac5a91e6616f9f31a209dbed57383e31bc6bb46584eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:44:53 np0005549474 systemd[1]: libpod-conmon-3c0f1a59d85ec32720eac5a91e6616f9f31a209dbed57383e31bc6bb46584eb5.scope: Deactivated successfully.
Dec  7 04:44:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:53 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:53 np0005549474 podman[103899]: 2025-12-07 09:44:53.352445939 +0000 UTC m=+0.058118252 container create 533ff6f0832580e0d6a11d002222a61dde7066bb2773e9d16ae4cbecfab288b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:44:53 np0005549474 systemd[1]: Started libpod-conmon-533ff6f0832580e0d6a11d002222a61dde7066bb2773e9d16ae4cbecfab288b2.scope.
Dec  7 04:44:53 np0005549474 podman[103899]: 2025-12-07 09:44:53.333497654 +0000 UTC m=+0.039169987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:44:53 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c57ebbdf1f5f0bf973c0c38c782149c3c16dfaddd519a96b88a0de3a98e6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c57ebbdf1f5f0bf973c0c38c782149c3c16dfaddd519a96b88a0de3a98e6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c57ebbdf1f5f0bf973c0c38c782149c3c16dfaddd519a96b88a0de3a98e6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c57ebbdf1f5f0bf973c0c38c782149c3c16dfaddd519a96b88a0de3a98e6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:53 np0005549474 podman[103899]: 2025-12-07 09:44:53.441723052 +0000 UTC m=+0.147395395 container init 533ff6f0832580e0d6a11d002222a61dde7066bb2773e9d16ae4cbecfab288b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:44:53 np0005549474 podman[103899]: 2025-12-07 09:44:53.451560079 +0000 UTC m=+0.157232392 container start 533ff6f0832580e0d6a11d002222a61dde7066bb2773e9d16ae4cbecfab288b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_tharp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:44:53 np0005549474 podman[103899]: 2025-12-07 09:44:53.454463954 +0000 UTC m=+0.160136297 container attach 533ff6f0832580e0d6a11d002222a61dde7066bb2773e9d16ae4cbecfab288b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_tharp, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]: {
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:    "0": [
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:        {
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            "devices": [
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "/dev/loop3"
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            ],
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            "lv_name": "ceph_lv0",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            "lv_size": "21470642176",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            "name": "ceph_lv0",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            "tags": {
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.cluster_name": "ceph",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.crush_device_class": "",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.encrypted": "0",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.osd_id": "0",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.type": "block",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.vdo": "0",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:                "ceph.with_tpm": "0"
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            },
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            "type": "block",
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:            "vg_name": "ceph_vg0"
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:        }
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]:    ]
Dec  7 04:44:53 np0005549474 awesome_tharp[103962]: }
Dec  7 04:44:53 np0005549474 systemd[1]: libpod-533ff6f0832580e0d6a11d002222a61dde7066bb2773e9d16ae4cbecfab288b2.scope: Deactivated successfully.
Dec  7 04:44:53 np0005549474 podman[103899]: 2025-12-07 09:44:53.741818225 +0000 UTC m=+0.447490538 container died 533ff6f0832580e0d6a11d002222a61dde7066bb2773e9d16ae4cbecfab288b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:44:53 np0005549474 systemd[1]: var-lib-containers-storage-overlay-842c57ebbdf1f5f0bf973c0c38c782149c3c16dfaddd519a96b88a0de3a98e6b-merged.mount: Deactivated successfully.
Dec  7 04:44:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec  7 04:44:53 np0005549474 podman[103899]: 2025-12-07 09:44:53.785171163 +0000 UTC m=+0.490843476 container remove 533ff6f0832580e0d6a11d002222a61dde7066bb2773e9d16ae4cbecfab288b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 04:44:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec  7 04:44:53 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec  7 04:44:53 np0005549474 systemd[1]: libpod-conmon-533ff6f0832580e0d6a11d002222a61dde7066bb2773e9d16ae4cbecfab288b2.scope: Deactivated successfully.
Dec  7 04:44:53 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.12 deep-scrub starts
Dec  7 04:44:53 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.12 deep-scrub ok
Dec  7 04:44:54 np0005549474 python3.9[104136]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:44:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v18: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 2 objects/s recovering
Dec  7 04:44:54 np0005549474 podman[104183]: 2025-12-07 09:44:54.34334918 +0000 UTC m=+0.040938589 container create ee83666cb60a0b794db1e40c900bea78a15ab6553b2261abf1e055a9f022e327 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_volhard, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:44:54 np0005549474 systemd[1]: Started libpod-conmon-ee83666cb60a0b794db1e40c900bea78a15ab6553b2261abf1e055a9f022e327.scope.
Dec  7 04:44:54 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:54 np0005549474 podman[104183]: 2025-12-07 09:44:54.412770422 +0000 UTC m=+0.110359651 container init ee83666cb60a0b794db1e40c900bea78a15ab6553b2261abf1e055a9f022e327 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:44:54 np0005549474 podman[104183]: 2025-12-07 09:44:54.323599932 +0000 UTC m=+0.021189181 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:44:54 np0005549474 podman[104183]: 2025-12-07 09:44:54.419848769 +0000 UTC m=+0.117437988 container start ee83666cb60a0b794db1e40c900bea78a15ab6553b2261abf1e055a9f022e327 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_volhard, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:44:54 np0005549474 podman[104183]: 2025-12-07 09:44:54.422657881 +0000 UTC m=+0.120247120 container attach ee83666cb60a0b794db1e40c900bea78a15ab6553b2261abf1e055a9f022e327 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 04:44:54 np0005549474 amazing_volhard[104200]: 167 167
Dec  7 04:44:54 np0005549474 systemd[1]: libpod-ee83666cb60a0b794db1e40c900bea78a15ab6553b2261abf1e055a9f022e327.scope: Deactivated successfully.
Dec  7 04:44:54 np0005549474 podman[104183]: 2025-12-07 09:44:54.427017429 +0000 UTC m=+0.124606668 container died ee83666cb60a0b794db1e40c900bea78a15ab6553b2261abf1e055a9f022e327 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_volhard, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 04:44:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:54 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:54 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5f8dd47fd9c841d7ac439905f480afac2328a73ab493cbe59ed9dc84816b33ca-merged.mount: Deactivated successfully.
Dec  7 04:44:54 np0005549474 podman[104183]: 2025-12-07 09:44:54.465778173 +0000 UTC m=+0.163367392 container remove ee83666cb60a0b794db1e40c900bea78a15ab6553b2261abf1e055a9f022e327 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_volhard, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:44:54 np0005549474 systemd[1]: libpod-conmon-ee83666cb60a0b794db1e40c900bea78a15ab6553b2261abf1e055a9f022e327.scope: Deactivated successfully.
Dec  7 04:44:54 np0005549474 podman[104248]: 2025-12-07 09:44:54.638249711 +0000 UTC m=+0.047208543 container create 6356097497a09e80ce6da743fd45341a94b21125801086950f681af88d47bc5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_williams, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 04:44:54 np0005549474 systemd[1]: Started libpod-conmon-6356097497a09e80ce6da743fd45341a94b21125801086950f681af88d47bc5a.scope.
Dec  7 04:44:54 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61062bcfa925c9329252636d0976d3950b445116d21fdad06c47075d526be465/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:54 np0005549474 podman[104248]: 2025-12-07 09:44:54.615959278 +0000 UTC m=+0.024918130 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:44:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61062bcfa925c9329252636d0976d3950b445116d21fdad06c47075d526be465/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61062bcfa925c9329252636d0976d3950b445116d21fdad06c47075d526be465/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61062bcfa925c9329252636d0976d3950b445116d21fdad06c47075d526be465/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:54 np0005549474 podman[104248]: 2025-12-07 09:44:54.720440326 +0000 UTC m=+0.129399158 container init 6356097497a09e80ce6da743fd45341a94b21125801086950f681af88d47bc5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  7 04:44:54 np0005549474 podman[104248]: 2025-12-07 09:44:54.727059 +0000 UTC m=+0.136017842 container start 6356097497a09e80ce6da743fd45341a94b21125801086950f681af88d47bc5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_williams, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:44:54 np0005549474 podman[104248]: 2025-12-07 09:44:54.730760108 +0000 UTC m=+0.139718950 container attach 6356097497a09e80ce6da743fd45341a94b21125801086950f681af88d47bc5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:44:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:54 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa8c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:54 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec  7 04:44:54 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec  7 04:44:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:55.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:55.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:55 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:55 np0005549474 lvm[104469]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:44:55 np0005549474 lvm[104469]: VG ceph_vg0 finished
Dec  7 04:44:55 np0005549474 practical_williams[104264]: {}
Dec  7 04:44:55 np0005549474 systemd[1]: libpod-6356097497a09e80ce6da743fd45341a94b21125801086950f681af88d47bc5a.scope: Deactivated successfully.
Dec  7 04:44:55 np0005549474 systemd[1]: libpod-6356097497a09e80ce6da743fd45341a94b21125801086950f681af88d47bc5a.scope: Consumed 1.041s CPU time.
Dec  7 04:44:55 np0005549474 conmon[104264]: conmon 6356097497a09e80ce6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6356097497a09e80ce6da743fd45341a94b21125801086950f681af88d47bc5a.scope/container/memory.events
Dec  7 04:44:55 np0005549474 podman[104248]: 2025-12-07 09:44:55.411827921 +0000 UTC m=+0.820786743 container died 6356097497a09e80ce6da743fd45341a94b21125801086950f681af88d47bc5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 04:44:55 np0005549474 systemd[1]: var-lib-containers-storage-overlay-61062bcfa925c9329252636d0976d3950b445116d21fdad06c47075d526be465-merged.mount: Deactivated successfully.
Dec  7 04:44:55 np0005549474 podman[104248]: 2025-12-07 09:44:55.459145106 +0000 UTC m=+0.868103938 container remove 6356097497a09e80ce6da743fd45341a94b21125801086950f681af88d47bc5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_williams, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:44:55 np0005549474 python3.9[104467]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:44:55 np0005549474 systemd[1]: libpod-conmon-6356097497a09e80ce6da743fd45341a94b21125801086950f681af88d47bc5a.scope: Deactivated successfully.
Dec  7 04:44:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:44:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:44:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  7 04:44:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:55 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  7 04:44:55 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  7 04:44:55 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  7 04:44:55 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  7 04:44:55 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec  7 04:44:55 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec  7 04:44:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:44:56 np0005549474 podman[104681]: 2025-12-07 09:44:56.163841011 +0000 UTC m=+0.036154769 volume create ea0432e3051e71574e3a908c1c257279b32d496461bc3bc323260015cd9bc647
Dec  7 04:44:56 np0005549474 podman[104681]: 2025-12-07 09:44:56.1741025 +0000 UTC m=+0.046416258 container create 4caaaaaca5e801be4541b9f64284013fbc484dbcae6e9da930c8439e36bc8f47 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_varahamihira, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 systemd[1]: Started libpod-conmon-4caaaaaca5e801be4541b9f64284013fbc484dbcae6e9da930c8439e36bc8f47.scope.
Dec  7 04:44:56 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b7d4e93b3f24e6c473d54b5cd30997b77ca6868479f0508531c4e1d950f6a8/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:56 np0005549474 podman[104681]: 2025-12-07 09:44:56.149062868 +0000 UTC m=+0.021376646 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  7 04:44:56 np0005549474 podman[104681]: 2025-12-07 09:44:56.249805226 +0000 UTC m=+0.122119014 container init 4caaaaaca5e801be4541b9f64284013fbc484dbcae6e9da930c8439e36bc8f47 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_varahamihira, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 podman[104681]: 2025-12-07 09:44:56.255643727 +0000 UTC m=+0.127957485 container start 4caaaaaca5e801be4541b9f64284013fbc484dbcae6e9da930c8439e36bc8f47 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_varahamihira, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 angry_varahamihira[104706]: 65534 65534
Dec  7 04:44:56 np0005549474 systemd[1]: libpod-4caaaaaca5e801be4541b9f64284013fbc484dbcae6e9da930c8439e36bc8f47.scope: Deactivated successfully.
Dec  7 04:44:56 np0005549474 podman[104681]: 2025-12-07 09:44:56.259299605 +0000 UTC m=+0.131613383 container attach 4caaaaaca5e801be4541b9f64284013fbc484dbcae6e9da930c8439e36bc8f47 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_varahamihira, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 podman[104681]: 2025-12-07 09:44:56.259672305 +0000 UTC m=+0.131986073 container died 4caaaaaca5e801be4541b9f64284013fbc484dbcae6e9da930c8439e36bc8f47 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_varahamihira, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 systemd[1]: var-lib-containers-storage-overlay-98b7d4e93b3f24e6c473d54b5cd30997b77ca6868479f0508531c4e1d950f6a8-merged.mount: Deactivated successfully.
Dec  7 04:44:56 np0005549474 podman[104681]: 2025-12-07 09:44:56.294968778 +0000 UTC m=+0.167282536 container remove 4caaaaaca5e801be4541b9f64284013fbc484dbcae6e9da930c8439e36bc8f47 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_varahamihira, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 podman[104681]: 2025-12-07 09:44:56.297804901 +0000 UTC m=+0.170118659 volume remove ea0432e3051e71574e3a908c1c257279b32d496461bc3bc323260015cd9bc647
Dec  7 04:44:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v19: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 59 B/s, 1 objects/s recovering
Dec  7 04:44:56 np0005549474 systemd[1]: libpod-conmon-4caaaaaca5e801be4541b9f64284013fbc484dbcae6e9da930c8439e36bc8f47.scope: Deactivated successfully.
Dec  7 04:44:56 np0005549474 podman[104760]: 2025-12-07 09:44:56.357718665 +0000 UTC m=+0.040641091 volume create 4a433c00d33ecf2884af516584abe65337a4b9ab3307160f71632bd83f29f7d9
Dec  7 04:44:56 np0005549474 podman[104760]: 2025-12-07 09:44:56.36712484 +0000 UTC m=+0.050047266 container create 690aec5e9389db5f4a4cd6286f196ed13945f49d3d6b43df9b8e6dfadf232ac6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=stoic_einstein, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 systemd[1]: Started libpod-conmon-690aec5e9389db5f4a4cd6286f196ed13945f49d3d6b43df9b8e6dfadf232ac6.scope.
Dec  7 04:44:56 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:44:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61c1f2a4abc95fbc0851714943b33c0f12b0da24dc30dceadbe2628998d85fc4/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 04:44:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:56 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:56 np0005549474 podman[104760]: 2025-12-07 09:44:56.342215711 +0000 UTC m=+0.025138167 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  7 04:44:56 np0005549474 podman[104760]: 2025-12-07 09:44:56.446495192 +0000 UTC m=+0.129417628 container init 690aec5e9389db5f4a4cd6286f196ed13945f49d3d6b43df9b8e6dfadf232ac6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=stoic_einstein, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 podman[104760]: 2025-12-07 09:44:56.451998663 +0000 UTC m=+0.134921089 container start 690aec5e9389db5f4a4cd6286f196ed13945f49d3d6b43df9b8e6dfadf232ac6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=stoic_einstein, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 stoic_einstein[104806]: 65534 65534
Dec  7 04:44:56 np0005549474 systemd[1]: libpod-690aec5e9389db5f4a4cd6286f196ed13945f49d3d6b43df9b8e6dfadf232ac6.scope: Deactivated successfully.
Dec  7 04:44:56 np0005549474 podman[104760]: 2025-12-07 09:44:56.454577799 +0000 UTC m=+0.137500255 container attach 690aec5e9389db5f4a4cd6286f196ed13945f49d3d6b43df9b8e6dfadf232ac6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=stoic_einstein, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 podman[104760]: 2025-12-07 09:44:56.455134866 +0000 UTC m=+0.138057292 container died 690aec5e9389db5f4a4cd6286f196ed13945f49d3d6b43df9b8e6dfadf232ac6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=stoic_einstein, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 systemd[1]: var-lib-containers-storage-overlay-61c1f2a4abc95fbc0851714943b33c0f12b0da24dc30dceadbe2628998d85fc4-merged.mount: Deactivated successfully.
Dec  7 04:44:56 np0005549474 podman[104760]: 2025-12-07 09:44:56.507948311 +0000 UTC m=+0.190870737 container remove 690aec5e9389db5f4a4cd6286f196ed13945f49d3d6b43df9b8e6dfadf232ac6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=stoic_einstein, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 podman[104760]: 2025-12-07 09:44:56.512171545 +0000 UTC m=+0.195093971 volume remove 4a433c00d33ecf2884af516584abe65337a4b9ab3307160f71632bd83f29f7d9
Dec  7 04:44:56 np0005549474 systemd[1]: libpod-conmon-690aec5e9389db5f4a4cd6286f196ed13945f49d3d6b43df9b8e6dfadf232ac6.scope: Deactivated successfully.
Dec  7 04:44:56 np0005549474 python3.9[104802]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:44:56 np0005549474 systemd[1]: Stopping Ceph alertmanager.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:44:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[99272]: ts=2025-12-07T09:44:56.732Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Dec  7 04:44:56 np0005549474 podman[104882]: 2025-12-07 09:44:56.743074972 +0000 UTC m=+0.046422259 container died 04c8c2e8a280cff3b4a05dd7c603babcc202832ee5c7ed7519a19bb4d3debe52 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:56 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:56 np0005549474 systemd[1]: var-lib-containers-storage-overlay-23b1fdc19db2ff3d8cc6e33fb714d072d9de45e71e4f4e2f60ae8a4ffe5b3929-merged.mount: Deactivated successfully.
Dec  7 04:44:56 np0005549474 podman[104882]: 2025-12-07 09:44:56.780243651 +0000 UTC m=+0.083590928 container remove 04c8c2e8a280cff3b4a05dd7c603babcc202832ee5c7ed7519a19bb4d3debe52 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:56 np0005549474 podman[104882]: 2025-12-07 09:44:56.784708491 +0000 UTC m=+0.088055798 volume remove bbecabe2ba68b9677e7fcee6c79ccd56f46ed5ff3a150457392e6e72cec26188
Dec  7 04:44:56 np0005549474 bash[104882]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0
Dec  7 04:44:56 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec  7 04:44:56 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec  7 04:44:56 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@alertmanager.compute-0.service: Deactivated successfully.
Dec  7 04:44:56 np0005549474 systemd[1]: Stopped Ceph alertmanager.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:44:56 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@alertmanager.compute-0.service: Consumed 1.011s CPU time.
Dec  7 04:44:56 np0005549474 systemd[1]: Starting Ceph alertmanager.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:44:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:57 np0005549474 podman[105032]: 2025-12-07 09:44:57.11917003 +0000 UTC m=+0.020837291 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  7 04:44:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:57 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffac4001ac0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:57 np0005549474 python3.9[105123]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:44:57 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Dec  7 04:44:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:44:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:44:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:44:58 np0005549474 python3.9[105275]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:44:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v20: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 50 B/s, 1 objects/s recovering
Dec  7 04:44:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:58 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:58 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Dec  7 04:44:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:58 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:58 np0005549474 podman[105032]: 2025-12-07 09:44:58.89384629 +0000 UTC m=+1.795513531 volume create 7d65f878f7628a4dad8085372fa4b568a4907b70aff41cfb6789ef69a2ee4fda
Dec  7 04:44:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:44:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:59.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:44:59 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec  7 04:44:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:44:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:44:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:59.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:44:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:44:59 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:44:59 np0005549474 python3.9[105426]: ansible-ansible.builtin.service_facts Invoked
Dec  7 04:44:59 np0005549474 network[105444]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  7 04:44:59 np0005549474 network[105445]: 'network-scripts' will be removed from distribution in near future.
Dec  7 04:44:59 np0005549474 network[105446]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  7 04:44:59 np0005549474 podman[105032]: 2025-12-07 09:44:59.448080681 +0000 UTC m=+2.349747932 container create d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:44:59 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec  7 04:44:59 np0005549474 ceph-mon[74516]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  7 04:44:59 np0005549474 ceph-mon[74516]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  7 04:44:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:44:59] "GET /metrics HTTP/1.1" 200 48288 "" "Prometheus/2.51.0"
Dec  7 04:44:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:44:59] "GET /metrics HTTP/1.1" 200 48288 "" "Prometheus/2.51.0"
Dec  7 04:44:59 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Dec  7 04:45:00 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Dec  7 04:45:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5366d727744552e4f1932121ae4587743dd3e93b6764af580e026107caebf638/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5366d727744552e4f1932121ae4587743dd3e93b6764af580e026107caebf638/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v21: 337 pgs: 1 active+clean+scrubbing, 336 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  7 04:45:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  7 04:45:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  7 04:45:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:00 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffac4001ac0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec  7 04:45:00 np0005549474 podman[105032]: 2025-12-07 09:45:00.50862231 +0000 UTC m=+3.410289581 container init d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:45:00 np0005549474 podman[105032]: 2025-12-07 09:45:00.515442929 +0000 UTC m=+3.417110180 container start d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:45:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:45:00.544Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec  7 04:45:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:45:00.545Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec  7 04:45:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:45:00.554Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec  7 04:45:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:45:00.555Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec  7 04:45:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:45:00.596Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  7 04:45:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:45:00.597Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  7 04:45:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:45:00.601Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec  7 04:45:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:45:00.601Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec  7 04:45:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:00 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:00 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec  7 04:45:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:01.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:01.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:01 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec  7 04:45:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:01 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:01 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.c scrub starts
Dec  7 04:45:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v22: 337 pgs: 1 active+clean+scrubbing, 336 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  7 04:45:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:02 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:45:02.555Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000071416s
Dec  7 04:45:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=infra.usagestats t=2025-12-07T09:45:02.654885155Z level=info msg="Usage stats are ready to report"
Dec  7 04:45:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:02 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffac40023e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:03.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:03.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:03 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:04 np0005549474 bash[105032]: d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b
Dec  7 04:45:04 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec  7 04:45:04 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.c scrub ok
Dec  7 04:45:04 np0005549474 systemd[1]: Started Ceph alertmanager.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:04 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec  7 04:45:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v23: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:04 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec  7 04:45:04 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 94 pg[6.b( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=68/68 les/c/f=69/69/0 sis=94) [0] r=0 lpr=94 pi=[68,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:45:04 np0005549474 python3.9[105737]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  7 04:45:04 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:04 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:05 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Dec  7 04:45:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:05.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:05.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:05 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:05 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  7 04:45:05 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  7 04:45:05 np0005549474 ceph-mgr[74811]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Dec  7 04:45:05 np0005549474 ceph-mgr[74811]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Dec  7 04:45:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:05 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffac40023e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec  7 04:45:05 np0005549474 python3.9[105889]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec  7 04:45:05 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec  7 04:45:05 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 95 pg[6.b( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=94/95 n=1 ec=53/21 lis/c=68/68 les/c/f=69/69/0 sis=94) [0] r=0 lpr=94 pi=[68,94)/1 crt=46'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:45:05 np0005549474 podman[105984]: 2025-12-07 09:45:05.870320871 +0000 UTC m=+0.106090295 container create 8bbb8906a1f3c8f39e62c5e9d6e8b86607d4605e6bc9954629aa3962ade4a700 (image=quay.io/ceph/grafana:10.4.0, name=adoring_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:05 np0005549474 podman[105984]: 2025-12-07 09:45:05.783971615 +0000 UTC m=+0.019741059 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  7 04:45:05 np0005549474 systemd[1]: Started libpod-conmon-8bbb8906a1f3c8f39e62c5e9d6e8b86607d4605e6bc9954629aa3962ade4a700.scope.
Dec  7 04:45:05 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:45:05 np0005549474 podman[105984]: 2025-12-07 09:45:05.962335525 +0000 UTC m=+0.198104969 container init 8bbb8906a1f3c8f39e62c5e9d6e8b86607d4605e6bc9954629aa3962ade4a700 (image=quay.io/ceph/grafana:10.4.0, name=adoring_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:05 np0005549474 podman[105984]: 2025-12-07 09:45:05.96900388 +0000 UTC m=+0.204773304 container start 8bbb8906a1f3c8f39e62c5e9d6e8b86607d4605e6bc9954629aa3962ade4a700 (image=quay.io/ceph/grafana:10.4.0, name=adoring_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:05 np0005549474 adoring_clarke[106000]: 472 0
Dec  7 04:45:05 np0005549474 systemd[1]: libpod-8bbb8906a1f3c8f39e62c5e9d6e8b86607d4605e6bc9954629aa3962ade4a700.scope: Deactivated successfully.
Dec  7 04:45:06 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.e scrub starts
Dec  7 04:45:06 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.e scrub ok
Dec  7 04:45:06 np0005549474 podman[105984]: 2025-12-07 09:45:06.092913406 +0000 UTC m=+0.328682830 container attach 8bbb8906a1f3c8f39e62c5e9d6e8b86607d4605e6bc9954629aa3962ade4a700 (image=quay.io/ceph/grafana:10.4.0, name=adoring_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:06 np0005549474 podman[105984]: 2025-12-07 09:45:06.09374254 +0000 UTC m=+0.329511964 container died 8bbb8906a1f3c8f39e62c5e9d6e8b86607d4605e6bc9954629aa3962ade4a700 (image=quay.io/ceph/grafana:10.4.0, name=adoring_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f255fee32ff845a37072e60e2883b177cc71f6e995e88990188c3be3fdcb6a19-merged.mount: Deactivated successfully.
Dec  7 04:45:06 np0005549474 podman[105984]: 2025-12-07 09:45:06.133560376 +0000 UTC m=+0.369329800 container remove 8bbb8906a1f3c8f39e62c5e9d6e8b86607d4605e6bc9954629aa3962ade4a700 (image=quay.io/ceph/grafana:10.4.0, name=adoring_clarke, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:06 np0005549474 systemd[1]: libpod-conmon-8bbb8906a1f3c8f39e62c5e9d6e8b86607d4605e6bc9954629aa3962ade4a700.scope: Deactivated successfully.
Dec  7 04:45:06 np0005549474 podman[106017]: 2025-12-07 09:45:06.173860956 +0000 UTC m=+0.021813340 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  7 04:45:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v26: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  7 04:45:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:06 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec  7 04:45:06 np0005549474 podman[106017]: 2025-12-07 09:45:06.664693231 +0000 UTC m=+0.512645595 container create 4780261f64c71593e03bef3f49bdf52c580fcf61db42357459f4bdaa69224f78 (image=quay.io/ceph/grafana:10.4.0, name=inspiring_blackburn, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: Reconfiguring daemon grafana.compute-0 on compute-0
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec  7 04:45:06 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec  7 04:45:06 np0005549474 systemd[1]: Started libpod-conmon-4780261f64c71593e03bef3f49bdf52c580fcf61db42357459f4bdaa69224f78.scope.
Dec  7 04:45:06 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:45:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:06 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:06 np0005549474 podman[106017]: 2025-12-07 09:45:06.844560815 +0000 UTC m=+0.692513179 container init 4780261f64c71593e03bef3f49bdf52c580fcf61db42357459f4bdaa69224f78 (image=quay.io/ceph/grafana:10.4.0, name=inspiring_blackburn, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:06 np0005549474 podman[106017]: 2025-12-07 09:45:06.850798147 +0000 UTC m=+0.698750521 container start 4780261f64c71593e03bef3f49bdf52c580fcf61db42357459f4bdaa69224f78 (image=quay.io/ceph/grafana:10.4.0, name=inspiring_blackburn, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:06 np0005549474 inspiring_blackburn[106159]: 472 0
Dec  7 04:45:06 np0005549474 systemd[1]: libpod-4780261f64c71593e03bef3f49bdf52c580fcf61db42357459f4bdaa69224f78.scope: Deactivated successfully.
Dec  7 04:45:06 np0005549474 podman[106017]: 2025-12-07 09:45:06.855107514 +0000 UTC m=+0.703059878 container attach 4780261f64c71593e03bef3f49bdf52c580fcf61db42357459f4bdaa69224f78 (image=quay.io/ceph/grafana:10.4.0, name=inspiring_blackburn, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:06 np0005549474 podman[106017]: 2025-12-07 09:45:06.856134553 +0000 UTC m=+0.704086917 container died 4780261f64c71593e03bef3f49bdf52c580fcf61db42357459f4bdaa69224f78 (image=quay.io/ceph/grafana:10.4.0, name=inspiring_blackburn, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay-33df1da45e78b1e57a64652596f88e293f23fe8ad2d8acd256e1476f1dd89718-merged.mount: Deactivated successfully.
Dec  7 04:45:06 np0005549474 python3.9[106156]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:45:06 np0005549474 podman[106017]: 2025-12-07 09:45:06.894517567 +0000 UTC m=+0.742469931 container remove 4780261f64c71593e03bef3f49bdf52c580fcf61db42357459f4bdaa69224f78 (image=quay.io/ceph/grafana:10.4.0, name=inspiring_blackburn, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:06 np0005549474 systemd[1]: libpod-conmon-4780261f64c71593e03bef3f49bdf52c580fcf61db42357459f4bdaa69224f78.scope: Deactivated successfully.
Dec  7 04:45:06 np0005549474 systemd[1]: Stopping Ceph grafana.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:45:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:07.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:07 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec  7 04:45:07 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec  7 04:45:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:07.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=server t=2025-12-07T09:45:07.15974844Z level=info msg="Shutdown started" reason="System signal: terminated"
Dec  7 04:45:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=tracing t=2025-12-07T09:45:07.159892324Z level=info msg="Closing tracing"
Dec  7 04:45:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=ticker t=2025-12-07T09:45:07.160037298Z level=info msg=stopped last_tick=2025-12-07T09:45:00Z
Dec  7 04:45:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=grafana-apiserver t=2025-12-07T09:45:07.16011013Z level=info msg="StorageObjectCountTracker pruner is exiting"
Dec  7 04:45:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[99796]: logger=sqlstore.transactions t=2025-12-07T09:45:07.171191455Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  7 04:45:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:07 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:07 np0005549474 podman[106219]: 2025-12-07 09:45:07.70372082 +0000 UTC m=+0.573890277 container died 43800770719fe020c26f12af78188fe644bc6c260647b0a99f09085a97e3e860 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec  7 04:45:07 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  7 04:45:07 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  7 04:45:07 np0005549474 systemd[1]: var-lib-containers-storage-overlay-baa0cbd381729b8617a0c6fe30ab1b1fb652594b53eb847a449211e828eef2f9-merged.mount: Deactivated successfully.
Dec  7 04:45:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec  7 04:45:07 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec  7 04:45:07 np0005549474 podman[106219]: 2025-12-07 09:45:07.774519802 +0000 UTC m=+0.644689259 container remove 43800770719fe020c26f12af78188fe644bc6c260647b0a99f09085a97e3e860 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:07 np0005549474 bash[106219]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0
Dec  7 04:45:07 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@grafana.compute-0.service: Deactivated successfully.
Dec  7 04:45:07 np0005549474 systemd[1]: Stopped Ceph grafana.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:45:07 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@grafana.compute-0.service: Consumed 4.063s CPU time.
Dec  7 04:45:07 np0005549474 systemd[1]: Starting Ceph grafana.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:45:08 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.3 deep-scrub starts
Dec  7 04:45:08 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.3 deep-scrub ok
Dec  7 04:45:08 np0005549474 python3.9[106418]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:45:08 np0005549474 podman[106474]: 2025-12-07 09:45:08.184381308 +0000 UTC m=+0.059228685 container create d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e111caaa1f06461951aaa5283f7bc4e3d822bc92d707640b9b7fa7b99abdf59/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e111caaa1f06461951aaa5283f7bc4e3d822bc92d707640b9b7fa7b99abdf59/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e111caaa1f06461951aaa5283f7bc4e3d822bc92d707640b9b7fa7b99abdf59/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e111caaa1f06461951aaa5283f7bc4e3d822bc92d707640b9b7fa7b99abdf59/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e111caaa1f06461951aaa5283f7bc4e3d822bc92d707640b9b7fa7b99abdf59/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:08 np0005549474 podman[106474]: 2025-12-07 09:45:08.149546278 +0000 UTC m=+0.024393685 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  7 04:45:08 np0005549474 podman[106474]: 2025-12-07 09:45:08.26441405 +0000 UTC m=+0.139261527 container init d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:08 np0005549474 podman[106474]: 2025-12-07 09:45:08.270420826 +0000 UTC m=+0.145268233 container start d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:08 np0005549474 bash[106474]: d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342
Dec  7 04:45:08 np0005549474 systemd[1]: Started Ceph grafana.compute-0 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v29: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec  7 04:45:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: [07/Dec/2025:09:45:08] ENGINE Bus STOPPING
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: [prometheus INFO root] Restarting engine...
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:45:08] ENGINE Bus STOPPING
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:08 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffac40023e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.492853916Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-07T09:45:08Z
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.493119374Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.493128004Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.493132294Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.493728852Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.493735802Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.493741662Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.493747092Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.493752212Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.494001679Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.49400934Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.49401511Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.4940206Z level=info msg=Target target=[all]
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.494221256Z level=info msg="Path Home" path=/usr/share/grafana
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.494228156Z level=info msg="Path Data" path=/var/lib/grafana
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.494236126Z level=info msg="Path Logs" path=/var/log/grafana
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.494241076Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.494515834Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=settings t=2025-12-07T09:45:08.494532865Z level=info msg="App mode production"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=sqlstore t=2025-12-07T09:45:08.495027349Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=sqlstore t=2025-12-07T09:45:08.49505075Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=migrator t=2025-12-07T09:45:08.495934816Z level=info msg="Starting DB migrations"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=migrator t=2025-12-07T09:45:08.515928771Z level=info msg="migrations completed" performed=0 skipped=547 duration=924.626µs
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: [07/Dec/2025:09:45:08] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: [07/Dec/2025:09:45:08] ENGINE Bus STOPPED
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=sqlstore t=2025-12-07T09:45:08.518433535Z level=info msg="Created default organization"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: [07/Dec/2025:09:45:08] ENGINE Bus STARTING
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:45:08] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:45:08] ENGINE Bus STOPPED
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=secrets t=2025-12-07T09:45:08.519250848Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:45:08] ENGINE Bus STARTING
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=plugin.store t=2025-12-07T09:45:08.544628471Z level=info msg="Loading plugins..."
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: [07/Dec/2025:09:45:08] ENGINE Serving on http://:::9283
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: [07/Dec/2025:09:45:08] ENGINE Bus STARTED
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:45:08] ENGINE Serving on http://:::9283
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:45:08] ENGINE Bus STARTED
Dec  7 04:45:08 np0005549474 ceph-mgr[74811]: [prometheus INFO root] Engine started.
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=local.finder t=2025-12-07T09:45:08.63137569Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=plugin.store t=2025-12-07T09:45:08.631443712Z level=info msg="Plugins loaded" count=55 duration=86.814811ms
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=query_data t=2025-12-07T09:45:08.636127229Z level=info msg="Query Service initialization"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=live.push_http t=2025-12-07T09:45:08.639630482Z level=info msg="Live Push Gateway initialization"
Dec  7 04:45:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:08 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:09.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:09 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.b scrub starts
Dec  7 04:45:09 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.b scrub ok
Dec  7 04:45:09 np0005549474 python3.9[106653]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:45:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:09.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:09 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=ngalert.migration t=2025-12-07T09:45:09.346686765Z level=info msg=Starting
Dec  7 04:45:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec  7 04:45:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:09] "GET /metrics HTTP/1.1" 200 48288 "" "Prometheus/2.51.0"
Dec  7 04:45:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:09] "GET /metrics HTTP/1.1" 200 48288 "" "Prometheus/2.51.0"
Dec  7 04:45:10 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec  7 04:45:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v30: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Dec  7 04:45:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:10 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:45:10.558Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003139784s
Dec  7 04:45:10 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec  7 04:45:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:10 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffac4003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  7 04:45:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  7 04:45:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  7 04:45:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=ngalert.state.manager t=2025-12-07T09:45:10.916755517Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec  7 04:45:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=infra.usagestats.collector t=2025-12-07T09:45:10.920435694Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec  7 04:45:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=provisioning.datasources t=2025-12-07T09:45:10.925444101Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Dec  7 04:45:11 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.8 deep-scrub starts
Dec  7 04:45:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:11.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:11.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:11 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:11 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.8 deep-scrub ok
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=provisioning.alerting t=2025-12-07T09:45:11.648334758Z level=info msg="starting to provision alerting"
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=provisioning.alerting t=2025-12-07T09:45:11.648360269Z level=info msg="finished to provision alerting"
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=ngalert.state.manager t=2025-12-07T09:45:11.649136831Z level=info msg="Warming state cache for startup"
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=ngalert.multiorg.alertmanager t=2025-12-07T09:45:11.651391257Z level=info msg="Starting MultiOrg Alertmanager"
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=grafanaStorageLogger t=2025-12-07T09:45:11.651429048Z level=info msg="Storage starting"
Dec  7 04:45:11 np0005549474 podman[106727]: 2025-12-07 09:45:11.655774616 +0000 UTC m=+2.468651252 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 04:45:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  7 04:45:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  7 04:45:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec  7 04:45:11 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=http.server t=2025-12-07T09:45:11.671046302Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=http.server t=2025-12-07T09:45:11.672509336Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=ngalert.state.manager t=2025-12-07T09:45:11.685753942Z level=info msg="State cache has been initialized" states=0 duration=36.615111ms
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=ngalert.scheduler t=2025-12-07T09:45:11.685797474Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=ticker t=2025-12-07T09:45:11.685883206Z level=info msg=starting first_tick=2025-12-07T09:45:20Z
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=plugins.update.checker t=2025-12-07T09:45:11.731039129Z level=info msg="Update check succeeded" duration=81.741074ms
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=grafana.update.checker t=2025-12-07T09:45:11.736061605Z level=info msg="Update check succeeded" duration=87.523271ms
Dec  7 04:45:11 np0005549474 podman[106727]: 2025-12-07 09:45:11.758537593 +0000 UTC m=+2.571414209 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=provisioning.dashboard t=2025-12-07T09:45:11.873626871Z level=info msg="starting to provision dashboards"
Dec  7 04:45:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=provisioning.dashboard t=2025-12-07T09:45:11.892542055Z level=info msg="finished to provision dashboards"
Dec  7 04:45:11 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  7 04:45:11 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  7 04:45:12 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Dec  7 04:45:12 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Dec  7 04:45:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=grafana-apiserver t=2025-12-07T09:45:12.059491641Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec  7 04:45:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=grafana-apiserver t=2025-12-07T09:45:12.060599303Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec  7 04:45:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v32: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 0 objects/s recovering
Dec  7 04:45:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 98 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=98 pruub=11.690052032s) [1] r=-1 lpr=98 pi=[78,98)/1 crt=56'1095 mlcod 0'0 active pruub 280.900695801s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:45:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 98 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=98 pruub=11.689977646s) [1] r=-1 lpr=98 pi=[78,98)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 280.900695801s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:45:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 98 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=6 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=98 pruub=11.689295769s) [1] r=-1 lpr=98 pi=[78,98)/1 crt=56'1095 mlcod 0'0 active pruub 280.900726318s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:45:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 98 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=6 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=98 pruub=11.689095497s) [1] r=-1 lpr=98 pi=[78,98)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 280.900726318s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:45:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:45:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:45:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:45:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:45:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:45:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:45:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:12 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:45:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:45:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:12 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:13 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.a scrub starts
Dec  7 04:45:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:13.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec  7 04:45:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:13.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:13 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.a scrub ok
Dec  7 04:45:13 np0005549474 podman[106868]: 2025-12-07 09:45:13.173713391 +0000 UTC m=+0.890903676 container exec 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:45:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:13 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffac4003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:13 np0005549474 podman[106868]: 2025-12-07 09:45:13.29805206 +0000 UTC m=+1.015242395 container exec_died 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:45:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec  7 04:45:13 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec  7 04:45:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 99 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=6 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=99) [1]/[0] r=0 lpr=99 pi=[78,99)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:45:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 99 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=6 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=99) [1]/[0] r=0 lpr=99 pi=[78,99)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 04:45:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 99 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=99) [1]/[0] r=0 lpr=99 pi=[78,99)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:45:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 99 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=78/79 n=5 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=99) [1]/[0] r=0 lpr=99 pi=[78,99)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 04:45:13 np0005549474 podman[106971]: 2025-12-07 09:45:13.891245801 +0000 UTC m=+0.067672382 container exec 3032e77de59ae8dbc659eb5f3388808ef7e05cac9360a3023473657f3f714972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 04:45:14 np0005549474 podman[106993]: 2025-12-07 09:45:14.001478767 +0000 UTC m=+0.088812780 container exec_died 3032e77de59ae8dbc659eb5f3388808ef7e05cac9360a3023473657f3f714972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 04:45:14 np0005549474 podman[106971]: 2025-12-07 09:45:14.029322422 +0000 UTC m=+0.205748993 container exec_died 3032e77de59ae8dbc659eb5f3388808ef7e05cac9360a3023473657f3f714972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 04:45:14 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Dec  7 04:45:14 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Dec  7 04:45:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v34: 337 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 unknown, 333 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3/205 objects misplaced (1.463%)
Dec  7 04:45:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:14 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffaa4003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec  7 04:45:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:14 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa94003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:15.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:15 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Dec  7 04:45:15 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Dec  7 04:45:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:15.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:15 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:16 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec  7 04:45:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v35: 337 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 unknown, 333 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3/205 objects misplaced (1.463%)
Dec  7 04:45:16 np0005549474 kernel: ganesha.nfsd[99322]: segfault at 50 ip 00007ffb7732632e sp 00007ffb457f9210 error 4 in libntirpc.so.5.8[7ffb7730b000+2c000] likely on CPU 1 (core 0, socket 1)
Dec  7 04:45:16 np0005549474 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  7 04:45:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[97727]: 07/12/2025 09:45:16 : epoch 69354c44 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffa98004140 fd 47 proxy ignored for local
Dec  7 04:45:16 np0005549474 systemd[1]: Created slice Slice /system/systemd-coredump.
Dec  7 04:45:16 np0005549474 systemd[1]: Started Process Core Dump (PID 107090/UID 0).
Dec  7 04:45:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:17.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:17 np0005549474 podman[107046]: 2025-12-07 09:45:17.14440511 +0000 UTC m=+2.937560574 container exec e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:45:17 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec  7 04:45:17 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec  7 04:45:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v36: 337 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 unknown, 333 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3/205 objects misplaced (1.463%)
Dec  7 04:45:18 np0005549474 systemd-coredump[107091]: Process 97731 (ganesha.nfsd) of user 0 dumped core.
Dec  7 04:45:18 np0005549474 systemd-coredump[107091]: Stack trace of thread 55:
Dec  7 04:45:18 np0005549474 systemd-coredump[107091]: #0  0x00007ffb7732632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Dec  7 04:45:18 np0005549474 systemd-coredump[107091]: ELF object binary architecture: AMD x86-64
Dec  7 04:45:18 np0005549474 systemd[1]: systemd-coredump@0-107090-0.service: Deactivated successfully.
Dec  7 04:45:18 np0005549474 systemd[1]: systemd-coredump@0-107090-0.service: Consumed 1.250s CPU time.
Dec  7 04:45:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:19.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:19 np0005549474 podman[107102]: 2025-12-07 09:45:19.264695856 +0000 UTC m=+2.093437210 container exec_died e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:45:19 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec  7 04:45:19 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec  7 04:45:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:19] "GET /metrics HTTP/1.1" 200 48285 "" "Prometheus/2.51.0"
Dec  7 04:45:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:19] "GET /metrics HTTP/1.1" 200 48285 "" "Prometheus/2.51.0"
Dec  7 04:45:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v37: 337 pgs: 2 active+clean+scrubbing, 2 active+remapped, 2 unknown, 331 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:20 np0005549474 podman[107046]: 2025-12-07 09:45:20.649842735 +0000 UTC m=+6.442998179 container exec_died e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:45:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:21.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:21 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec  7 04:45:21 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec  7 04:45:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v38: 337 pgs: 2 active+clean+scrubbing, 2 active+remapped, 2 unknown, 331 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec  7 04:45:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:23.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:23 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec  7 04:45:23 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec  7 04:45:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:23.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:23 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec  7 04:45:23 np0005549474 podman[107120]: 2025-12-07 09:45:23.425442918 +0000 UTC m=+4.874371340 container died 3032e77de59ae8dbc659eb5f3388808ef7e05cac9360a3023473657f3f714972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:45:23 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec  7 04:45:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec  7 04:45:23 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 100 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=99/100 n=6 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[78,99)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:45:23 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 100 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=99/100 n=5 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[78,99)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:45:24 np0005549474 systemd[1]: var-lib-containers-storage-overlay-649542f307cced57b8fe0950304d5c93960602a94da0e7f2895f956ca8bfb913-merged.mount: Deactivated successfully.
Dec  7 04:45:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v40: 337 pgs: 4 active+clean+scrubbing, 2 peering, 2 unknown, 329 active+clean; 454 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/094524 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:45:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:25.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:25.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v41: 337 pgs: 1 active+recovery_wait+remapped, 1 active+recovering+remapped, 2 active+clean+scrubbing, 2 peering, 331 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13/209 objects misplaced (6.220%)
Dec  7 04:45:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:27.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:27 np0005549474 podman[107120]: 2025-12-07 09:45:27.044102347 +0000 UTC m=+8.493030809 container remove 3032e77de59ae8dbc659eb5f3388808ef7e05cac9360a3023473657f3f714972 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 04:45:27 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Main process exited, code=exited, status=139/n/a
Dec  7 04:45:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:27.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:27 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Failed with result 'exit-code'.
Dec  7 04:45:27 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.781s CPU time.
Dec  7 04:45:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec  7 04:45:27 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec  7 04:45:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:45:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:45:27 np0005549474 podman[107209]: 2025-12-07 09:45:27.608598628 +0000 UTC m=+0.452824283 container exec 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, version=2.2.4, distribution-scope=public, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  7 04:45:27 np0005549474 podman[107209]: 2025-12-07 09:45:27.630619052 +0000 UTC m=+0.474844687 container exec_died 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, version=2.2.4, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc.)
Dec  7 04:45:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v43: 337 pgs: 1 active+recovery_wait+remapped, 1 active+recovering+remapped, 2 active+clean+scrubbing, 2 peering, 331 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13/209 objects misplaced (6.220%)
Dec  7 04:45:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:29.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:29.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:29] "GET /metrics HTTP/1.1" 200 48289 "" "Prometheus/2.51.0"
Dec  7 04:45:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:29] "GET /metrics HTTP/1.1" 200 48289 "" "Prometheus/2.51.0"
Dec  7 04:45:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v44: 337 pgs: 1 active+recovering+remapped, 1 active+remapped, 335 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3/208 objects misplaced (1.442%)
Dec  7 04:45:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:45:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:31.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:45:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec  7 04:45:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Dec  7 04:45:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  7 04:45:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec  7 04:45:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  7 04:45:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:31.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:31 np0005549474 podman[107297]: 2025-12-07 09:45:31.360256237 +0000 UTC m=+0.994824916 container exec d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:45:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec  7 04:45:31 np0005549474 podman[107297]: 2025-12-07 09:45:31.531539551 +0000 UTC m=+1.166108160 container exec_died d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:45:31 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 102 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=99/100 n=6 ec=57/42 lis/c=99/78 les/c/f=100/79/0 sis=102 pruub=8.212248802s) [1] async=[1] r=-1 lpr=102 pi=[78,102)/1 crt=56'1095 mlcod 56'1095 active pruub 296.693908691s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:45:31 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 102 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=99/100 n=6 ec=57/42 lis/c=99/78 les/c/f=100/79/0 sis=102 pruub=8.212073326s) [1] r=-1 lpr=102 pi=[78,102)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 296.693908691s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:45:31 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec  7 04:45:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v46: 337 pgs: 1 active+recovering+remapped, 1 active+remapped, 335 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3/208 objects misplaced (1.442%); 0 B/s, 2 objects/s recovering
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  7 04:45:32 np0005549474 podman[107380]: 2025-12-07 09:45:32.464440364 +0000 UTC m=+0.690986374 container exec d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  7 04:45:32 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec  7 04:45:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 103 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=99/100 n=5 ec=57/42 lis/c=99/78 les/c/f=100/79/0 sis=103 pruub=15.233267784s) [1] async=[1] r=-1 lpr=103 pi=[78,103)/1 crt=56'1095 mlcod 56'1095 active pruub 304.694152832s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:45:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 103 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=99/100 n=5 ec=57/42 lis/c=99/78 les/c/f=100/79/0 sis=103 pruub=15.233215332s) [1] r=-1 lpr=103 pi=[78,103)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 304.694152832s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:45:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 103 pg[6.e( v 46'39 (0'0,46'39] local-lis/les=77/78 n=1 ec=53/21 lis/c=77/77 les/c/f=78/78/0 sis=103 pruub=14.432253838s) [1] r=-1 lpr=103 pi=[77,103)/1 crt=46'39 mlcod 46'39 active pruub 303.894836426s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:45:32 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 103 pg[6.e( v 46'39 (0'0,46'39] local-lis/les=77/78 n=1 ec=53/21 lis/c=77/77 les/c/f=78/78/0 sis=103 pruub=14.432070732s) [1] r=-1 lpr=103 pi=[77,103)/1 crt=46'39 mlcod 0'0 unknown NOTIFY pruub 303.894836426s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:45:32 np0005549474 podman[107380]: 2025-12-07 09:45:32.644784452 +0000 UTC m=+0.871330362 container exec_died d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:45:33 np0005549474 podman[107495]: 2025-12-07 09:45:33.026986267 +0000 UTC m=+0.047616074 container exec 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:45:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:33.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:33 np0005549474 podman[107495]: 2025-12-07 09:45:33.055872403 +0000 UTC m=+0.076502190 container exec_died 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:45:33 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec  7 04:45:33 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec  7 04:45:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:33.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec  7 04:45:33 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec  7 04:45:33 np0005549474 podman[107631]: 2025-12-07 09:45:33.84130064 +0000 UTC m=+0.035732537 container create 6e095b6b0995f62f9945f7e591ec2e94c7f067b313acc23e31850b5bbd5aa66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cori, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:45:33 np0005549474 systemd[1]: Started libpod-conmon-6e095b6b0995f62f9945f7e591ec2e94c7f067b313acc23e31850b5bbd5aa66a.scope.
Dec  7 04:45:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:45:33 np0005549474 podman[107631]: 2025-12-07 09:45:33.917419418 +0000 UTC m=+0.111851325 container init 6e095b6b0995f62f9945f7e591ec2e94c7f067b313acc23e31850b5bbd5aa66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:45:33 np0005549474 podman[107631]: 2025-12-07 09:45:33.826985101 +0000 UTC m=+0.021417018 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:45:33 np0005549474 podman[107631]: 2025-12-07 09:45:33.923721613 +0000 UTC m=+0.118153520 container start 6e095b6b0995f62f9945f7e591ec2e94c7f067b313acc23e31850b5bbd5aa66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:45:33 np0005549474 suspicious_cori[107649]: 167 167
Dec  7 04:45:33 np0005549474 systemd[1]: libpod-6e095b6b0995f62f9945f7e591ec2e94c7f067b313acc23e31850b5bbd5aa66a.scope: Deactivated successfully.
Dec  7 04:45:33 np0005549474 podman[107631]: 2025-12-07 09:45:33.92911116 +0000 UTC m=+0.123543077 container attach 6e095b6b0995f62f9945f7e591ec2e94c7f067b313acc23e31850b5bbd5aa66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cori, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  7 04:45:33 np0005549474 podman[107631]: 2025-12-07 09:45:33.9294416 +0000 UTC m=+0.123873507 container died 6e095b6b0995f62f9945f7e591ec2e94c7f067b313acc23e31850b5bbd5aa66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cori, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 04:45:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-32bcb96bd89ee24f58fb719c16e921fdf6018e27eae4db2dc4f3a42903164c4f-merged.mount: Deactivated successfully.
Dec  7 04:45:33 np0005549474 podman[107631]: 2025-12-07 09:45:33.969507283 +0000 UTC m=+0.163939170 container remove 6e095b6b0995f62f9945f7e591ec2e94c7f067b313acc23e31850b5bbd5aa66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 04:45:33 np0005549474 systemd[1]: libpod-conmon-6e095b6b0995f62f9945f7e591ec2e94c7f067b313acc23e31850b5bbd5aa66a.scope: Deactivated successfully.
Dec  7 04:45:34 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec  7 04:45:34 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec  7 04:45:34 np0005549474 podman[107676]: 2025-12-07 09:45:34.109299344 +0000 UTC m=+0.035736027 container create ac3b7f894f632d5d005895d6f7962d0330ddeaba05b991389ff529cb9b5003ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:45:34 np0005549474 systemd[1]: Started libpod-conmon-ac3b7f894f632d5d005895d6f7962d0330ddeaba05b991389ff529cb9b5003ae.scope.
Dec  7 04:45:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:45:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1536d6f57f1e4f20f8621874e7a03e7ccb39f04e50f16ce6aac8519ac8ea8c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1536d6f57f1e4f20f8621874e7a03e7ccb39f04e50f16ce6aac8519ac8ea8c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1536d6f57f1e4f20f8621874e7a03e7ccb39f04e50f16ce6aac8519ac8ea8c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1536d6f57f1e4f20f8621874e7a03e7ccb39f04e50f16ce6aac8519ac8ea8c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1536d6f57f1e4f20f8621874e7a03e7ccb39f04e50f16ce6aac8519ac8ea8c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:34 np0005549474 podman[107676]: 2025-12-07 09:45:34.093548513 +0000 UTC m=+0.019985216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:45:34 np0005549474 podman[107676]: 2025-12-07 09:45:34.25198196 +0000 UTC m=+0.178418673 container init ac3b7f894f632d5d005895d6f7962d0330ddeaba05b991389ff529cb9b5003ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 04:45:34 np0005549474 podman[107676]: 2025-12-07 09:45:34.2591854 +0000 UTC m=+0.185622083 container start ac3b7f894f632d5d005895d6f7962d0330ddeaba05b991389ff529cb9b5003ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Dec  7 04:45:34 np0005549474 podman[107676]: 2025-12-07 09:45:34.262702564 +0000 UTC m=+0.189139277 container attach ac3b7f894f632d5d005895d6f7962d0330ddeaba05b991389ff529cb9b5003ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  7 04:45:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v49: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:34 np0005549474 goofy_payne[107692]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:45:34 np0005549474 goofy_payne[107692]: --> All data devices are unavailable
Dec  7 04:45:34 np0005549474 systemd[1]: libpod-ac3b7f894f632d5d005895d6f7962d0330ddeaba05b991389ff529cb9b5003ae.scope: Deactivated successfully.
Dec  7 04:45:34 np0005549474 podman[107676]: 2025-12-07 09:45:34.6316175 +0000 UTC m=+0.558054193 container died ac3b7f894f632d5d005895d6f7962d0330ddeaba05b991389ff529cb9b5003ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:45:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b1536d6f57f1e4f20f8621874e7a03e7ccb39f04e50f16ce6aac8519ac8ea8c0-merged.mount: Deactivated successfully.
Dec  7 04:45:34 np0005549474 podman[107676]: 2025-12-07 09:45:34.673418644 +0000 UTC m=+0.599855327 container remove ac3b7f894f632d5d005895d6f7962d0330ddeaba05b991389ff529cb9b5003ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 04:45:34 np0005549474 systemd[1]: libpod-conmon-ac3b7f894f632d5d005895d6f7962d0330ddeaba05b991389ff529cb9b5003ae.scope: Deactivated successfully.
Dec  7 04:45:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:35.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:35 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Dec  7 04:45:35 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Dec  7 04:45:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:35.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:35 np0005549474 podman[107810]: 2025-12-07 09:45:35.302564048 +0000 UTC m=+0.047673057 container create fdee09bc134962548ae48c2a11decc16b1c1cbdbd2c0d2df8d95820b8ad5b7cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hofstadter, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:45:35 np0005549474 systemd[1]: Started libpod-conmon-fdee09bc134962548ae48c2a11decc16b1c1cbdbd2c0d2df8d95820b8ad5b7cc.scope.
Dec  7 04:45:35 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:45:35 np0005549474 podman[107810]: 2025-12-07 09:45:35.27806739 +0000 UTC m=+0.023176379 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:45:35 np0005549474 podman[107810]: 2025-12-07 09:45:35.38466673 +0000 UTC m=+0.129775719 container init fdee09bc134962548ae48c2a11decc16b1c1cbdbd2c0d2df8d95820b8ad5b7cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:45:35 np0005549474 podman[107810]: 2025-12-07 09:45:35.396027073 +0000 UTC m=+0.141136032 container start fdee09bc134962548ae48c2a11decc16b1c1cbdbd2c0d2df8d95820b8ad5b7cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hofstadter, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 04:45:35 np0005549474 podman[107810]: 2025-12-07 09:45:35.399142854 +0000 UTC m=+0.144251853 container attach fdee09bc134962548ae48c2a11decc16b1c1cbdbd2c0d2df8d95820b8ad5b7cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hofstadter, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 04:45:35 np0005549474 sharp_hofstadter[107827]: 167 167
Dec  7 04:45:35 np0005549474 systemd[1]: libpod-fdee09bc134962548ae48c2a11decc16b1c1cbdbd2c0d2df8d95820b8ad5b7cc.scope: Deactivated successfully.
Dec  7 04:45:35 np0005549474 podman[107810]: 2025-12-07 09:45:35.401640067 +0000 UTC m=+0.146749036 container died fdee09bc134962548ae48c2a11decc16b1c1cbdbd2c0d2df8d95820b8ad5b7cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:45:35 np0005549474 systemd[1]: var-lib-containers-storage-overlay-621083f03cd3d77a9e890fffab8aaa445aedd109655445f43177475f3aebb17d-merged.mount: Deactivated successfully.
Dec  7 04:45:35 np0005549474 podman[107810]: 2025-12-07 09:45:35.453547637 +0000 UTC m=+0.198656616 container remove fdee09bc134962548ae48c2a11decc16b1c1cbdbd2c0d2df8d95820b8ad5b7cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 04:45:35 np0005549474 systemd[1]: libpod-conmon-fdee09bc134962548ae48c2a11decc16b1c1cbdbd2c0d2df8d95820b8ad5b7cc.scope: Deactivated successfully.
Dec  7 04:45:35 np0005549474 podman[107851]: 2025-12-07 09:45:35.603456673 +0000 UTC m=+0.040879736 container create d731982b5cc38d80b7752932c5e65ba70ce66511304e8df1a8963f2664333fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Dec  7 04:45:35 np0005549474 systemd[1]: Started libpod-conmon-d731982b5cc38d80b7752932c5e65ba70ce66511304e8df1a8963f2664333fd1.scope.
Dec  7 04:45:35 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:45:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a08681f290c393dc544bac567c836711ae9e9568f4e1a0b39e8825c57c094b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a08681f290c393dc544bac567c836711ae9e9568f4e1a0b39e8825c57c094b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a08681f290c393dc544bac567c836711ae9e9568f4e1a0b39e8825c57c094b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a08681f290c393dc544bac567c836711ae9e9568f4e1a0b39e8825c57c094b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:35 np0005549474 podman[107851]: 2025-12-07 09:45:35.584245691 +0000 UTC m=+0.021668784 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:45:35 np0005549474 podman[107851]: 2025-12-07 09:45:35.681108936 +0000 UTC m=+0.118532029 container init d731982b5cc38d80b7752932c5e65ba70ce66511304e8df1a8963f2664333fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec  7 04:45:35 np0005549474 podman[107851]: 2025-12-07 09:45:35.690213933 +0000 UTC m=+0.127636986 container start d731982b5cc38d80b7752932c5e65ba70ce66511304e8df1a8963f2664333fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:45:35 np0005549474 podman[107851]: 2025-12-07 09:45:35.693388565 +0000 UTC m=+0.130811658 container attach d731982b5cc38d80b7752932c5e65ba70ce66511304e8df1a8963f2664333fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  7 04:45:35 np0005549474 jovial_napier[107868]: {
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:    "0": [
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:        {
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            "devices": [
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "/dev/loop3"
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            ],
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            "lv_name": "ceph_lv0",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            "lv_size": "21470642176",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            "name": "ceph_lv0",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            "tags": {
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.cluster_name": "ceph",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.crush_device_class": "",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.encrypted": "0",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.osd_id": "0",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.type": "block",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.vdo": "0",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:                "ceph.with_tpm": "0"
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            },
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            "type": "block",
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:            "vg_name": "ceph_vg0"
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:        }
Dec  7 04:45:35 np0005549474 jovial_napier[107868]:    ]
Dec  7 04:45:35 np0005549474 jovial_napier[107868]: }
Dec  7 04:45:35 np0005549474 systemd[1]: libpod-d731982b5cc38d80b7752932c5e65ba70ce66511304e8df1a8963f2664333fd1.scope: Deactivated successfully.
Dec  7 04:45:35 np0005549474 podman[107851]: 2025-12-07 09:45:35.996074244 +0000 UTC m=+0.433497307 container died d731982b5cc38d80b7752932c5e65ba70ce66511304e8df1a8963f2664333fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:45:36 np0005549474 systemd[1]: var-lib-containers-storage-overlay-24a08681f290c393dc544bac567c836711ae9e9568f4e1a0b39e8825c57c094b-merged.mount: Deactivated successfully.
Dec  7 04:45:36 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec  7 04:45:36 np0005549474 podman[107851]: 2025-12-07 09:45:36.039397303 +0000 UTC m=+0.476820366 container remove d731982b5cc38d80b7752932c5e65ba70ce66511304e8df1a8963f2664333fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:45:36 np0005549474 systemd[1]: libpod-conmon-d731982b5cc38d80b7752932c5e65ba70ce66511304e8df1a8963f2664333fd1.scope: Deactivated successfully.
Dec  7 04:45:36 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec  7 04:45:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:45:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v50: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:36 np0005549474 podman[108003]: 2025-12-07 09:45:36.596334323 +0000 UTC m=+0.044637568 container create 71e46c7ca280634e22fec6c4e048a2230556f996d34a798e4a1116ab7856f669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 04:45:36 np0005549474 systemd[1]: Started libpod-conmon-71e46c7ca280634e22fec6c4e048a2230556f996d34a798e4a1116ab7856f669.scope.
Dec  7 04:45:36 np0005549474 podman[108003]: 2025-12-07 09:45:36.57507573 +0000 UTC m=+0.023379025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:45:36 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:45:36 np0005549474 podman[108003]: 2025-12-07 09:45:36.695101363 +0000 UTC m=+0.143404618 container init 71e46c7ca280634e22fec6c4e048a2230556f996d34a798e4a1116ab7856f669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 04:45:36 np0005549474 podman[108003]: 2025-12-07 09:45:36.702824779 +0000 UTC m=+0.151128024 container start 71e46c7ca280634e22fec6c4e048a2230556f996d34a798e4a1116ab7856f669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 04:45:36 np0005549474 suspicious_ellis[108019]: 167 167
Dec  7 04:45:36 np0005549474 podman[108003]: 2025-12-07 09:45:36.706466516 +0000 UTC m=+0.154769771 container attach 71e46c7ca280634e22fec6c4e048a2230556f996d34a798e4a1116ab7856f669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:45:36 np0005549474 systemd[1]: libpod-71e46c7ca280634e22fec6c4e048a2230556f996d34a798e4a1116ab7856f669.scope: Deactivated successfully.
Dec  7 04:45:36 np0005549474 podman[108003]: 2025-12-07 09:45:36.709029301 +0000 UTC m=+0.157332576 container died 71e46c7ca280634e22fec6c4e048a2230556f996d34a798e4a1116ab7856f669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 04:45:36 np0005549474 systemd[1]: var-lib-containers-storage-overlay-16853fabd396adf236e464279d98cb1f0f3430afa7e1eccdbdc9ddcd4f9c2aa2-merged.mount: Deactivated successfully.
Dec  7 04:45:36 np0005549474 podman[108003]: 2025-12-07 09:45:36.749426603 +0000 UTC m=+0.197729848 container remove 71e46c7ca280634e22fec6c4e048a2230556f996d34a798e4a1116ab7856f669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_ellis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 04:45:36 np0005549474 systemd[1]: libpod-conmon-71e46c7ca280634e22fec6c4e048a2230556f996d34a798e4a1116ab7856f669.scope: Deactivated successfully.
Dec  7 04:45:36 np0005549474 podman[108044]: 2025-12-07 09:45:36.905430709 +0000 UTC m=+0.056496305 container create 974d0e4108a67f98770c4a2857ff509e345922b7f5d1a32376a59ebed33b782f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:45:36 np0005549474 systemd[1]: Started libpod-conmon-974d0e4108a67f98770c4a2857ff509e345922b7f5d1a32376a59ebed33b782f.scope.
Dec  7 04:45:36 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:45:36 np0005549474 podman[108044]: 2025-12-07 09:45:36.883135656 +0000 UTC m=+0.034201272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:45:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2334c515bbc2d55ad8be3f417b33ffe1dac06133275d003dea080736a92215fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2334c515bbc2d55ad8be3f417b33ffe1dac06133275d003dea080736a92215fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2334c515bbc2d55ad8be3f417b33ffe1dac06133275d003dea080736a92215fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2334c515bbc2d55ad8be3f417b33ffe1dac06133275d003dea080736a92215fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:36 np0005549474 podman[108044]: 2025-12-07 09:45:36.993500806 +0000 UTC m=+0.144566432 container init 974d0e4108a67f98770c4a2857ff509e345922b7f5d1a32376a59ebed33b782f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 04:45:37 np0005549474 podman[108044]: 2025-12-07 09:45:37.010803133 +0000 UTC m=+0.161868729 container start 974d0e4108a67f98770c4a2857ff509e345922b7f5d1a32376a59ebed33b782f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 04:45:37 np0005549474 podman[108044]: 2025-12-07 09:45:37.050252878 +0000 UTC m=+0.201318564 container attach 974d0e4108a67f98770c4a2857ff509e345922b7f5d1a32376a59ebed33b782f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 04:45:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:37.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:37 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec  7 04:45:37 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec  7 04:45:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:37.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:37 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Scheduled restart job, restart counter is at 1.
Dec  7 04:45:37 np0005549474 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:45:37 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.781s CPU time.
Dec  7 04:45:37 np0005549474 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:45:37 np0005549474 lvm[108192]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:45:37 np0005549474 lvm[108192]: VG ceph_vg0 finished
Dec  7 04:45:37 np0005549474 podman[108175]: 2025-12-07 09:45:37.716524847 +0000 UTC m=+0.098192405 container create a55418e417c7a6530d3327ad5a84642c87dba2409a9d1f66465b74e6f85dc7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:45:37 np0005549474 podman[108175]: 2025-12-07 09:45:37.637506034 +0000 UTC m=+0.019173622 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:45:37 np0005549474 sad_diffie[108060]: {}
Dec  7 04:45:37 np0005549474 systemd[1]: libpod-974d0e4108a67f98770c4a2857ff509e345922b7f5d1a32376a59ebed33b782f.scope: Deactivated successfully.
Dec  7 04:45:37 np0005549474 systemd[1]: libpod-974d0e4108a67f98770c4a2857ff509e345922b7f5d1a32376a59ebed33b782f.scope: Consumed 1.165s CPU time.
Dec  7 04:45:37 np0005549474 podman[108044]: 2025-12-07 09:45:37.865437106 +0000 UTC m=+1.016502722 container died 974d0e4108a67f98770c4a2857ff509e345922b7f5d1a32376a59ebed33b782f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:45:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63653c0a3e7cd94ebf74554bf7d979614d69d71f2ac5e7280f38fada6cca255/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63653c0a3e7cd94ebf74554bf7d979614d69d71f2ac5e7280f38fada6cca255/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63653c0a3e7cd94ebf74554bf7d979614d69d71f2ac5e7280f38fada6cca255/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d63653c0a3e7cd94ebf74554bf7d979614d69d71f2ac5e7280f38fada6cca255/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bjrqrk-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:45:38 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec  7 04:45:38 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec  7 04:45:38 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2334c515bbc2d55ad8be3f417b33ffe1dac06133275d003dea080736a92215fe-merged.mount: Deactivated successfully.
Dec  7 04:45:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v51: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:38 np0005549474 podman[108044]: 2025-12-07 09:45:38.431266736 +0000 UTC m=+1.582332362 container remove 974d0e4108a67f98770c4a2857ff509e345922b7f5d1a32376a59ebed33b782f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:45:38 np0005549474 podman[108175]: 2025-12-07 09:45:38.441185986 +0000 UTC m=+0.822853554 container init a55418e417c7a6530d3327ad5a84642c87dba2409a9d1f66465b74e6f85dc7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  7 04:45:38 np0005549474 podman[108175]: 2025-12-07 09:45:38.447110319 +0000 UTC m=+0.828777877 container start a55418e417c7a6530d3327ad5a84642c87dba2409a9d1f66465b74e6f85dc7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 04:45:38 np0005549474 bash[108175]: a55418e417c7a6530d3327ad5a84642c87dba2409a9d1f66465b74e6f85dc7a7
Dec  7 04:45:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:38 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 04:45:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:38 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 04:45:38 np0005549474 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:45:38 np0005549474 systemd[1]: libpod-conmon-974d0e4108a67f98770c4a2857ff509e345922b7f5d1a32376a59ebed33b782f.scope: Deactivated successfully.
Dec  7 04:45:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:45:38 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:45:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:38 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 04:45:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:38 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 04:45:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:38 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 04:45:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:38 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 04:45:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:38 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 04:45:38 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:38 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:45:38 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:38 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:45:39 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec  7 04:45:39 np0005549474 ceph-osd[83033]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec  7 04:45:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:39.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:39] "GET /metrics HTTP/1.1" 200 48289 "" "Prometheus/2.51.0"
Dec  7 04:45:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:39] "GET /metrics HTTP/1.1" 200 48289 "" "Prometheus/2.51.0"
Dec  7 04:45:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v52: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Dec  7 04:45:40 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  7 04:45:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec  7 04:45:40 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  7 04:45:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec  7 04:45:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:41.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:41.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  7 04:45:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  7 04:45:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec  7 04:45:41 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec  7 04:45:41 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  7 04:45:41 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  7 04:45:41 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  7 04:45:41 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  7 04:45:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:42 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:45:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:42 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:45:42
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['images', 'default.rgw.log', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.control', '.nfs']
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v54: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec  7 04:45:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 04:45:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:45:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:45:42 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 105 pg[6.f( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=68/68 les/c/f=69/69/0 sis=105) [0] r=0 lpr=105 pi=[68,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:45:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:45:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec  7 04:45:42 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  7 04:45:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  7 04:45:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec  7 04:45:43 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec  7 04:45:43 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 106 pg[6.f( v 46'39 lc 45'1 (0'0,46'39] local-lis/les=105/106 n=3 ec=53/21 lis/c=68/68 les/c/f=69/69/0 sis=105) [0] r=0 lpr=105 pi=[68,105)/1 crt=46'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:45:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:43.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:43.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec  7 04:45:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v56: 337 pgs: 1 unknown, 2 remapped+peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:44 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  7 04:45:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec  7 04:45:44 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec  7 04:45:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:45.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:45.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  7 04:45:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:45:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec  7 04:45:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec  7 04:45:46 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec  7 04:45:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v59: 337 pgs: 1 unknown, 2 remapped+peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:46 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bb0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:46 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:45:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec  7 04:45:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:47.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  7 04:45:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:47.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  7 04:45:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec  7 04:45:47 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec  7 04:45:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:47 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b94000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v61: 337 pgs: 1 unknown, 2 remapped+peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/094548 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:45:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:48 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec  7 04:45:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec  7 04:45:48 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec  7 04:45:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:48 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac001cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:49.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:49.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:49 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:49] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  7 04:45:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:49] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  7 04:45:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v63: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:50 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:50 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:51.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:45:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:51.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:45:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:51 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:45:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v64: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:52 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=infra.usagestats t=2025-12-07T09:45:52.659334071Z level=info msg="Usage stats are ready to report"
Dec  7 04:45:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:52 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:53.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:53.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:53 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v65: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec  7 04:45:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  7 04:45:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:54 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:54 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:55.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec  7 04:45:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  7 04:45:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec  7 04:45:55 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec  7 04:45:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  7 04:45:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:55.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:55 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:56 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  7 04:45:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v67: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec  7 04:45:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  7 04:45:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:56 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:56 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:57.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:57 np0005549474 ceph-mgr[74811]: [dashboard INFO request] [192.168.122.100:44040] [POST] [200] [0.161s] [4.0B] [9a5229c9-21cd-4aee-80be-6e547fbc0912] /api/prometheus_receiver
Dec  7 04:45:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:57.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:57 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec  7 04:45:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:45:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:45:58 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  7 04:45:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  7 04:45:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec  7 04:45:58 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec  7 04:45:58 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 112 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=4 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=112 pruub=8.065917015s) [2] r=-1 lpr=112 pi=[66,112)/1 crt=56'1095 mlcod 0'0 active pruub 322.987609863s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:45:58 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 112 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=4 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=112 pruub=8.065878868s) [2] r=-1 lpr=112 pi=[66,112)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 322.987609863s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:45:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v69: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:45:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:58 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:58 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:59 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  7 04:45:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec  7 04:45:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec  7 04:45:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:59.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:59 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec  7 04:45:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 113 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=4 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=113) [2]/[0] r=0 lpr=113 pi=[66,113)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:45:59 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 113 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=66/67 n=4 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=113) [2]/[0] r=0 lpr=113 pi=[66,113)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 04:45:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:45:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:45:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:59.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:45:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:45:59 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:45:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:59] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Dec  7 04:45:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:45:59] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Dec  7 04:46:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v71: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:46:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:00 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:00 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec  7 04:46:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec  7 04:46:00 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec  7 04:46:01 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 114 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=113/114 n=4 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=113) [2]/[0] async=[2] r=0 lpr=113 pi=[66,113)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:46:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:01.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:01.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:01 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec  7 04:46:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v73: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:46:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec  7 04:46:02 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec  7 04:46:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 115 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=113/114 n=4 ec=57/42 lis/c=113/66 les/c/f=114/67/0 sis=115 pruub=14.635011673s) [2] async=[2] r=-1 lpr=115 pi=[66,115)/1 crt=56'1095 mlcod 56'1095 active pruub 333.852661133s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:46:02 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 115 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=113/114 n=4 ec=57/42 lis/c=113/66 les/c/f=114/67/0 sis=115 pruub=14.634819984s) [2] r=-1 lpr=115 pi=[66,115)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 333.852661133s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 04:46:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:02 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:46:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:02 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:03.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:03.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:03 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec  7 04:46:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec  7 04:46:03 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec  7 04:46:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v76: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 584 B/s rd, 0 op/s; 20 B/s, 0 objects/s recovering
Dec  7 04:46:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:04 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:04 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:05.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:05.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:05 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v77: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec  7 04:46:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:06 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:06 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:46:06.953Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:46:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:46:06.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:46:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:07.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:07.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:07 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:46:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v78: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Dec  7 04:46:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec  7 04:46:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  7 04:46:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:08 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:08 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec  7 04:46:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  7 04:46:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec  7 04:46:09 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec  7 04:46:09 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  7 04:46:09 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 117 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=117) [0] r=0 lpr=117 pi=[64,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:46:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:09.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:09.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:09 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:09] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Dec  7 04:46:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:09] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Dec  7 04:46:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec  7 04:46:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  7 04:46:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec  7 04:46:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 118 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=118) [0]/[2] r=-1 lpr=118 pi=[64,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:46:10 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec  7 04:46:10 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 118 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=118) [0]/[2] r=-1 lpr=118 pi=[64,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:46:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v81: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 466 B/s rd, 0 op/s; 16 B/s, 0 objects/s recovering
Dec  7 04:46:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:10 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:10 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec  7 04:46:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:46:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:11.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:46:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec  7 04:46:11 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec  7 04:46:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:11.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:11 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec  7 04:46:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec  7 04:46:12 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec  7 04:46:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 120 pg[10.13( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=118/64 les/c/f=119/65/0 sis=120) [0] r=0 lpr=120 pi=[64,120)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:46:12 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 120 pg[10.13( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=118/64 les/c/f=119/65/0 sis=120) [0] r=0 lpr=120 pi=[64,120)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:46:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v84: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  7 04:46:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:46:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:46:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:46:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:46:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:46:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f2c70c535b0>)]
Dec  7 04:46:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  7 04:46:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:46:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f2c70c53ac0>)]
Dec  7 04:46:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  7 04:46:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:12 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 04:46:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:12 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:12 np0005549474 python3.9[108583]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:46:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:13.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec  7 04:46:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec  7 04:46:13 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec  7 04:46:13 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 121 pg[10.13( v 56'1095 (0'0,56'1095] local-lis/les=120/121 n=5 ec=57/42 lis/c=118/64 les/c/f=119/65/0 sis=120) [0] r=0 lpr=120 pi=[64,120)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:46:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:13.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:13 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v86: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:46:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:14 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:14 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:14 np0005549474 python3.9[108872]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  7 04:46:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:46:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:15.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:46:15 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e34: compute-0.dotugk(active, since 92s), standbys: compute-2.ntknug, compute-1.buauyv
Dec  7 04:46:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:15.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:15 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:15 np0005549474 python3.9[109026]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  7 04:46:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v87: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s wr, 0 op/s; 18 B/s, 0 objects/s recovering
Dec  7 04:46:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec  7 04:46:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  7 04:46:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:16 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:16 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:16 np0005549474 python3.9[109203]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:46:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:46:16.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:46:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:46:16.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:46:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:46:16.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:46:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:17.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:17.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec  7 04:46:17 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  7 04:46:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  7 04:46:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec  7 04:46:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec  7 04:46:17 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 122 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=73/73 les/c/f=74/74/0 sis=122) [0] r=0 lpr=122 pi=[73,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:46:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:17 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:46:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec  7 04:46:17 np0005549474 python3.9[109357]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  7 04:46:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec  7 04:46:17 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec  7 04:46:17 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 123 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=73/73 les/c/f=74/74/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[73,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:46:17 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 123 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=73/73 les/c/f=74/74/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[73,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:46:18 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  7 04:46:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v90: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s wr, 0 op/s; 18 B/s, 0 objects/s recovering
Dec  7 04:46:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:18 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:18 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec  7 04:46:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec  7 04:46:18 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec  7 04:46:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:46:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:19.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:46:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:19.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:19 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:19 np0005549474 python3.9[109513]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:46:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec  7 04:46:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec  7 04:46:19 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec  7 04:46:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:19] "GET /metrics HTTP/1.1" 200 48247 "" "Prometheus/2.51.0"
Dec  7 04:46:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:19] "GET /metrics HTTP/1.1" 200 48247 "" "Prometheus/2.51.0"
Dec  7 04:46:19 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 125 pg[10.14( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=123/73 les/c/f=124/74/0 sis=125) [0] r=0 lpr=125 pi=[73,125)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:46:19 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 125 pg[10.14( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=123/73 les/c/f=124/74/0 sis=125) [0] r=0 lpr=125 pi=[73,125)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:46:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v93: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  7 04:46:20 np0005549474 python3.9[109665]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:46:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:20 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:20 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:20 np0005549474 python3.9[109743]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:46:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec  7 04:46:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec  7 04:46:21 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec  7 04:46:21 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 126 pg[10.14( v 56'1095 (0'0,56'1095] local-lis/les=125/126 n=5 ec=57/42 lis/c=123/73 les/c/f=124/74/0 sis=125) [0] r=0 lpr=125 pi=[73,125)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:46:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:21.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:21 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b88000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:22 np0005549474 python3.9[109897]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:46:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v95: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 701 B/s rd, 0 op/s
Dec  7 04:46:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:22 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:46:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:22 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:23.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:23.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:23 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:23 np0005549474 python3.9[110053]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  7 04:46:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v96: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:46:24 np0005549474 python3.9[110206]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  7 04:46:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:24 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b88001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:24 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:25.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:25.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:25 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:25 np0005549474 python3.9[110361]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  7 04:46:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v97: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 416 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Dec  7 04:46:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec  7 04:46:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  7 04:46:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:26 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:26 np0005549474 python3.9[110513]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  7 04:46:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:26 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b88001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:46:26.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:46:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:27.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec  7 04:46:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:27.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:27 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  7 04:46:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  7 04:46:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec  7 04:46:27 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec  7 04:46:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:27 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:46:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:46:27 np0005549474 python3.9[110667]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:46:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:46:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v99: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  7 04:46:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec  7 04:46:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  7 04:46:28 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  7 04:46:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec  7 04:46:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  7 04:46:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec  7 04:46:28 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec  7 04:46:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:28 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:28 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:29.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:29 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b88001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:29 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  7 04:46:29 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  7 04:46:29 np0005549474 python3.9[110822]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:46:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:29] "GET /metrics HTTP/1.1" 200 48244 "" "Prometheus/2.51.0"
Dec  7 04:46:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:29] "GET /metrics HTTP/1.1" 200 48244 "" "Prometheus/2.51.0"
Dec  7 04:46:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v101: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Dec  7 04:46:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec  7 04:46:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  7 04:46:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec  7 04:46:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  7 04:46:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  7 04:46:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec  7 04:46:30 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec  7 04:46:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:30 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:30 np0005549474 python3.9[110974]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:46:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:30 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:31.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:31 np0005549474 python3.9[111054]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:46:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:31 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  7 04:46:32 np0005549474 python3.9[111207]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:46:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v103: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:46:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec  7 04:46:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  7 04:46:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec  7 04:46:32 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  7 04:46:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  7 04:46:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec  7 04:46:32 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec  7 04:46:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:32 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b94000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:32 np0005549474 python3.9[111285]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:46:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:46:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:32 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:46:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:33.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:46:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:33.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:33 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:33 np0005549474 python3.9[111439]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:46:34 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  7 04:46:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v105: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:46:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec  7 04:46:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  7 04:46:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:34 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:34 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b94000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:35.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec  7 04:46:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:35.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  7 04:46:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec  7 04:46:35 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec  7 04:46:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:35 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:35 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 131 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=90/90 les/c/f=91/91/0 sis=131) [0] r=0 lpr=131 pi=[90,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:46:35 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  7 04:46:35 np0005549474 python3.9[111592]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:46:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec  7 04:46:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec  7 04:46:36 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec  7 04:46:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 132 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=90/90 les/c/f=91/91/0 sis=132) [0]/[1] r=-1 lpr=132 pi=[90,132)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:46:36 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 132 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=90/90 les/c/f=91/91/0 sis=132) [0]/[1] r=-1 lpr=132 pi=[90,132)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:46:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v108: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:46:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec  7 04:46:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  7 04:46:36 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  7 04:46:36 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  7 04:46:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:36 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:36 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:46:36.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:46:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:46:36.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:46:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:46:36.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:46:37 np0005549474 python3.9[111770]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  7 04:46:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:37.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec  7 04:46:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:37 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b94000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  7 04:46:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec  7 04:46:37 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec  7 04:46:37 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  7 04:46:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:46:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec  7 04:46:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec  7 04:46:37 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec  7 04:46:37 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 134 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=7 ec=57/42 lis/c=132/90 les/c/f=133/91/0 sis=134) [0] r=0 lpr=134 pi=[90,134)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:46:37 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 134 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=7 ec=57/42 lis/c=132/90 les/c/f=133/91/0 sis=134) [0] r=0 lpr=134 pi=[90,134)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:46:37 np0005549474 python3.9[111921]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:46:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v111: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:46:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec  7 04:46:38 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  7 04:46:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:38 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec  7 04:46:38 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  7 04:46:38 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  7 04:46:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec  7 04:46:38 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec  7 04:46:38 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 135 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=97/97 les/c/f=98/98/0 sis=135) [0] r=0 lpr=135 pi=[97,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:46:38 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 135 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=134/135 n=7 ec=57/42 lis/c=132/90 les/c/f=133/91/0 sis=134) [0] r=0 lpr=134 pi=[90,134)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:46:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:38 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:39.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:39.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:39 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:39 np0005549474 python3.9[112159]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:46:39 np0005549474 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  7 04:46:39 np0005549474 systemd[1]: tuned.service: Deactivated successfully.
Dec  7 04:46:39 np0005549474 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  7 04:46:39 np0005549474 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  7 04:46:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:39] "GET /metrics HTTP/1.1" 200 48244 "" "Prometheus/2.51.0"
Dec  7 04:46:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:39] "GET /metrics HTTP/1.1" 200 48244 "" "Prometheus/2.51.0"
Dec  7 04:46:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec  7 04:46:40 np0005549474 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  7 04:46:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v113: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 765 B/s rd, 0 op/s; 27 B/s, 1 objects/s recovering
Dec  7 04:46:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:40 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940010d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:40 np0005549474 podman[112199]: 2025-12-07 09:46:40.549380344 +0000 UTC m=+1.027223990 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 04:46:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:40 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:40 np0005549474 python3.9[112392]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  7 04:46:41 np0005549474 podman[112199]: 2025-12-07 09:46:41.116071052 +0000 UTC m=+1.593914648 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:46:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:46:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:41.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:46:41 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  7 04:46:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec  7 04:46:41 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec  7 04:46:41 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 136 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=97/97 les/c/f=98/98/0 sis=136) [0]/[1] r=-1 lpr=136 pi=[97,136)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:46:41 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 136 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=97/97 les/c/f=98/98/0 sis=136) [0]/[1] r=-1 lpr=136 pi=[97,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 04:46:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:41.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:41 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:41 np0005549474 podman[112507]: 2025-12-07 09:46:41.734981225 +0000 UTC m=+0.084491509 container exec 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:46:41 np0005549474 podman[112532]: 2025-12-07 09:46:41.867396378 +0000 UTC m=+0.114606042 container exec_died 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:46:41 np0005549474 podman[112507]: 2025-12-07 09:46:41.932107888 +0000 UTC m=+0.281618052 container exec_died 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:46:42
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Some PGs (0.002967) are unknown; try again later
Dec  7 04:46:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v115: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 615 B/s rd, 0 op/s; 22 B/s, 1 objects/s recovering
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 04:46:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:46:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:46:42 np0005549474 podman[112596]: 2025-12-07 09:46:42.451014234 +0000 UTC m=+0.248571167 container exec a55418e417c7a6530d3327ad5a84642c87dba2409a9d1f66465b74e6f85dc7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:46:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:42 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec  7 04:46:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec  7 04:46:42 np0005549474 podman[112616]: 2025-12-07 09:46:42.5853261 +0000 UTC m=+0.113848880 container exec_died a55418e417c7a6530d3327ad5a84642c87dba2409a9d1f66465b74e6f85dc7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:46:42 np0005549474 podman[112596]: 2025-12-07 09:46:42.627502647 +0000 UTC m=+0.425059550 container exec_died a55418e417c7a6530d3327ad5a84642c87dba2409a9d1f66465b74e6f85dc7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec  7 04:46:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:46:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:46:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec  7 04:46:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:42 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b94002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec  7 04:46:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec  7 04:46:42 np0005549474 podman[112660]: 2025-12-07 09:46:42.917743007 +0000 UTC m=+0.107493615 container exec e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:46:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:43.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:43.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:43 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:43 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 138 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 04:46:43 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 138 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 04:46:43 np0005549474 podman[112660]: 2025-12-07 09:46:43.658013566 +0000 UTC m=+0.847764174 container exec_died e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:46:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec  7 04:46:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec  7 04:46:43 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec  7 04:46:43 np0005549474 podman[112724]: 2025-12-07 09:46:43.899670983 +0000 UTC m=+0.083250125 container exec 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vcs-type=git, version=2.2.4, architecture=x86_64, description=keepalived for Ceph, name=keepalived, release=1793, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  7 04:46:43 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 139 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=138/139 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 04:46:43 np0005549474 podman[112745]: 2025-12-07 09:46:43.972378024 +0000 UTC m=+0.054808967 container exec_died 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, distribution-scope=public, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec  7 04:46:44 np0005549474 podman[112724]: 2025-12-07 09:46:44.036361014 +0000 UTC m=+0.219940136 container exec_died 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, name=keepalived, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, release=1793, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  7 04:46:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v119: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec  7 04:46:44 np0005549474 podman[112811]: 2025-12-07 09:46:44.455153581 +0000 UTC m=+0.255864280 container exec d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:46:44 np0005549474 podman[112811]: 2025-12-07 09:46:44.483563876 +0000 UTC m=+0.284274475 container exec_died d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:46:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:44 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:44 np0005549474 python3.9[112959]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:46:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:44 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:44 np0005549474 podman[112992]: 2025-12-07 09:46:44.8845471 +0000 UTC m=+0.169246523 container exec d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:46:45 np0005549474 podman[112992]: 2025-12-07 09:46:45.075541214 +0000 UTC m=+0.360240617 container exec_died d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:46:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:45.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:45.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:45 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b94002eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:45 np0005549474 podman[113262]: 2025-12-07 09:46:45.528477325 +0000 UTC m=+0.093438916 container exec 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:46:45 np0005549474 podman[113262]: 2025-12-07 09:46:45.557300942 +0000 UTC m=+0.122262483 container exec_died 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:46:45 np0005549474 python3.9[113247]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:46:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.117484) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806117520, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2990, "num_deletes": 252, "total_data_size": 6978548, "memory_usage": 7184392, "flush_reason": "Manual Compaction"}
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec  7 04:46:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806475640, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6694892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7921, "largest_seqno": 10910, "table_properties": {"data_size": 6680517, "index_size": 9205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4101, "raw_key_size": 37612, "raw_average_key_size": 23, "raw_value_size": 6648584, "raw_average_value_size": 4066, "num_data_blocks": 396, "num_entries": 1635, "num_filter_entries": 1635, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100663, "oldest_key_time": 1765100663, "file_creation_time": 1765100806, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 358237 microseconds, and 11475 cpu microseconds.
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.475689) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6694892 bytes OK
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.475738) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.477540) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.477597) EVENT_LOG_v1 {"time_micros": 1765100806477586, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.477620) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 6964446, prev total WAL file size 7000985, number of live WAL files 2.
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.479283) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(6537KB)], [23(11MB)]
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806479347, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 18620222, "oldest_snapshot_seqno": -1}
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:46 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:46 np0005549474 systemd[1]: session-39.scope: Deactivated successfully.
Dec  7 04:46:46 np0005549474 systemd[1]: session-39.scope: Consumed 1min 2.738s CPU time.
Dec  7 04:46:46 np0005549474 systemd-logind[796]: Session 39 logged out. Waiting for processes to exit.
Dec  7 04:46:46 np0005549474 systemd-logind[796]: Removed session 39.
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4138 keys, 14108570 bytes, temperature: kUnknown
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806737717, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14108570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14075358, "index_size": 21774, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 105622, "raw_average_key_size": 25, "raw_value_size": 13994090, "raw_average_value_size": 3381, "num_data_blocks": 935, "num_entries": 4138, "num_filter_entries": 4138, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765100806, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.738151) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14108570 bytes
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.739498) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.0 rd, 54.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.4, 11.4 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(4.9) write-amplify(2.1) OK, records in: 4676, records dropped: 538 output_compression: NoCompression
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.739514) EVENT_LOG_v1 {"time_micros": 1765100806739506, "job": 8, "event": "compaction_finished", "compaction_time_micros": 258641, "compaction_time_cpu_micros": 40315, "output_level": 6, "num_output_files": 1, "total_output_size": 14108570, "num_input_records": 4676, "num_output_records": 4138, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806740646, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806744819, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.479166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.744869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.744876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.744879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.744882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:46:46 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:46:46.744885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:46:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:46 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:46:46.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:46:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  7 04:46:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec  7 04:46:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:47.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:46:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:47.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:47 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:47 np0005549474 podman[113509]: 2025-12-07 09:46:47.703632792 +0000 UTC m=+0.060772271 container create 73d0b5fd4eb96dc8cabdfd11abde89579731f8be11bb6b6a8ce58d0c3ba0683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 04:46:47 np0005549474 systemd[1]: Started libpod-conmon-73d0b5fd4eb96dc8cabdfd11abde89579731f8be11bb6b6a8ce58d0c3ba0683d.scope.
Dec  7 04:46:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:46:47 np0005549474 podman[113509]: 2025-12-07 09:46:47.672751208 +0000 UTC m=+0.029890757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:46:47 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:46:47 np0005549474 podman[113509]: 2025-12-07 09:46:47.79499354 +0000 UTC m=+0.152133079 container init 73d0b5fd4eb96dc8cabdfd11abde89579731f8be11bb6b6a8ce58d0c3ba0683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:46:47 np0005549474 podman[113509]: 2025-12-07 09:46:47.804088992 +0000 UTC m=+0.161228491 container start 73d0b5fd4eb96dc8cabdfd11abde89579731f8be11bb6b6a8ce58d0c3ba0683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  7 04:46:47 np0005549474 podman[113509]: 2025-12-07 09:46:47.807673402 +0000 UTC m=+0.164812891 container attach 73d0b5fd4eb96dc8cabdfd11abde89579731f8be11bb6b6a8ce58d0c3ba0683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:46:47 np0005549474 thirsty_gauss[113525]: 167 167
Dec  7 04:46:47 np0005549474 systemd[1]: libpod-73d0b5fd4eb96dc8cabdfd11abde89579731f8be11bb6b6a8ce58d0c3ba0683d.scope: Deactivated successfully.
Dec  7 04:46:47 np0005549474 podman[113509]: 2025-12-07 09:46:47.811266551 +0000 UTC m=+0.168406010 container died 73d0b5fd4eb96dc8cabdfd11abde89579731f8be11bb6b6a8ce58d0c3ba0683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:46:47 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ce1627233807ca0d1f059aabca010ca715d7b6b97b9cfc5dfd470c762dee30e4-merged.mount: Deactivated successfully.
Dec  7 04:46:47 np0005549474 podman[113509]: 2025-12-07 09:46:47.853618412 +0000 UTC m=+0.210757871 container remove 73d0b5fd4eb96dc8cabdfd11abde89579731f8be11bb6b6a8ce58d0c3ba0683d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Dec  7 04:46:47 np0005549474 systemd[1]: libpod-conmon-73d0b5fd4eb96dc8cabdfd11abde89579731f8be11bb6b6a8ce58d0c3ba0683d.scope: Deactivated successfully.
Dec  7 04:46:48 np0005549474 podman[113548]: 2025-12-07 09:46:48.01218393 +0000 UTC m=+0.046448637 container create 62d2dc9bfbcd7683a7ebcd0affe75cba234589f4d6c7158cb03a2a27ea475d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_panini, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 04:46:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:46:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  7 04:46:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:48 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:46:48 np0005549474 systemd[1]: Started libpod-conmon-62d2dc9bfbcd7683a7ebcd0affe75cba234589f4d6c7158cb03a2a27ea475d08.scope.
Dec  7 04:46:48 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:46:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b303d9cc449032a3704516cb899cdf078624a4aa41f5b13557848ade58a4aa5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b303d9cc449032a3704516cb899cdf078624a4aa41f5b13557848ade58a4aa5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b303d9cc449032a3704516cb899cdf078624a4aa41f5b13557848ade58a4aa5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b303d9cc449032a3704516cb899cdf078624a4aa41f5b13557848ade58a4aa5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b303d9cc449032a3704516cb899cdf078624a4aa41f5b13557848ade58a4aa5d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:48 np0005549474 podman[113548]: 2025-12-07 09:46:47.993234605 +0000 UTC m=+0.027499332 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:46:48 np0005549474 podman[113548]: 2025-12-07 09:46:48.094039284 +0000 UTC m=+0.128304011 container init 62d2dc9bfbcd7683a7ebcd0affe75cba234589f4d6c7158cb03a2a27ea475d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 04:46:48 np0005549474 podman[113548]: 2025-12-07 09:46:48.101980394 +0000 UTC m=+0.136245101 container start 62d2dc9bfbcd7683a7ebcd0affe75cba234589f4d6c7158cb03a2a27ea475d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_panini, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 04:46:48 np0005549474 podman[113548]: 2025-12-07 09:46:48.105036218 +0000 UTC m=+0.139300975 container attach 62d2dc9bfbcd7683a7ebcd0affe75cba234589f4d6c7158cb03a2a27ea475d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 04:46:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v122: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec  7 04:46:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec  7 04:46:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  7 04:46:48 np0005549474 mystifying_panini[113564]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:46:48 np0005549474 mystifying_panini[113564]: --> All data devices are unavailable
Dec  7 04:46:48 np0005549474 systemd[1]: libpod-62d2dc9bfbcd7683a7ebcd0affe75cba234589f4d6c7158cb03a2a27ea475d08.scope: Deactivated successfully.
Dec  7 04:46:48 np0005549474 podman[113548]: 2025-12-07 09:46:48.424102566 +0000 UTC m=+0.458367293 container died 62d2dc9bfbcd7683a7ebcd0affe75cba234589f4d6c7158cb03a2a27ea475d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:46:48 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b303d9cc449032a3704516cb899cdf078624a4aa41f5b13557848ade58a4aa5d-merged.mount: Deactivated successfully.
Dec  7 04:46:48 np0005549474 podman[113548]: 2025-12-07 09:46:48.460573185 +0000 UTC m=+0.494837892 container remove 62d2dc9bfbcd7683a7ebcd0affe75cba234589f4d6c7158cb03a2a27ea475d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  7 04:46:48 np0005549474 systemd[1]: libpod-conmon-62d2dc9bfbcd7683a7ebcd0affe75cba234589f4d6c7158cb03a2a27ea475d08.scope: Deactivated successfully.
Dec  7 04:46:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:48 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:48 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:48 np0005549474 podman[113686]: 2025-12-07 09:46:48.978431312 +0000 UTC m=+0.038450475 container create 4be76dab313b8a21321e73e304e0638c5b809277148503fb4756b2726695b592 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:46:49 np0005549474 systemd[1]: Started libpod-conmon-4be76dab313b8a21321e73e304e0638c5b809277148503fb4756b2726695b592.scope.
Dec  7 04:46:49 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:46:49 np0005549474 podman[113686]: 2025-12-07 09:46:49.04923272 +0000 UTC m=+0.109251923 container init 4be76dab313b8a21321e73e304e0638c5b809277148503fb4756b2726695b592 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:46:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec  7 04:46:49 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  7 04:46:49 np0005549474 podman[113686]: 2025-12-07 09:46:48.962679466 +0000 UTC m=+0.022698659 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:46:49 np0005549474 podman[113686]: 2025-12-07 09:46:49.058942608 +0000 UTC m=+0.118961781 container start 4be76dab313b8a21321e73e304e0638c5b809277148503fb4756b2726695b592 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 04:46:49 np0005549474 wonderful_curie[113702]: 167 167
Dec  7 04:46:49 np0005549474 systemd[1]: libpod-4be76dab313b8a21321e73e304e0638c5b809277148503fb4756b2726695b592.scope: Deactivated successfully.
Dec  7 04:46:49 np0005549474 podman[113686]: 2025-12-07 09:46:49.064244935 +0000 UTC m=+0.124264198 container attach 4be76dab313b8a21321e73e304e0638c5b809277148503fb4756b2726695b592 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_curie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:46:49 np0005549474 conmon[113702]: conmon 4be76dab313b8a21321e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4be76dab313b8a21321e73e304e0638c5b809277148503fb4756b2726695b592.scope/container/memory.events
Dec  7 04:46:49 np0005549474 podman[113686]: 2025-12-07 09:46:49.065580763 +0000 UTC m=+0.125599936 container died 4be76dab313b8a21321e73e304e0638c5b809277148503fb4756b2726695b592 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:46:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  7 04:46:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec  7 04:46:49 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec  7 04:46:49 np0005549474 systemd[1]: var-lib-containers-storage-overlay-de126c7147639582bed392efc0dbde9ed68a001d978253dc856d1f6930a7d52d-merged.mount: Deactivated successfully.
Dec  7 04:46:49 np0005549474 podman[113686]: 2025-12-07 09:46:49.102574406 +0000 UTC m=+0.162593579 container remove 4be76dab313b8a21321e73e304e0638c5b809277148503fb4756b2726695b592 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_curie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:46:49 np0005549474 systemd[1]: libpod-conmon-4be76dab313b8a21321e73e304e0638c5b809277148503fb4756b2726695b592.scope: Deactivated successfully.
Dec  7 04:46:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:49.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:49 np0005549474 podman[113727]: 2025-12-07 09:46:49.27296086 +0000 UTC m=+0.044418510 container create e05f6d626fed033b299e9d1cb8ce5130bea0a11dd913d4737f7e222c9b1b3c20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  7 04:46:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:49.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:49 np0005549474 systemd[1]: Started libpod-conmon-e05f6d626fed033b299e9d1cb8ce5130bea0a11dd913d4737f7e222c9b1b3c20.scope.
Dec  7 04:46:49 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:46:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7635953c55845ab19ece0a7c1e704257a07b62058493ac0256db4470f121118c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7635953c55845ab19ece0a7c1e704257a07b62058493ac0256db4470f121118c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7635953c55845ab19ece0a7c1e704257a07b62058493ac0256db4470f121118c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7635953c55845ab19ece0a7c1e704257a07b62058493ac0256db4470f121118c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:49 np0005549474 podman[113727]: 2025-12-07 09:46:49.347454561 +0000 UTC m=+0.118912251 container init e05f6d626fed033b299e9d1cb8ce5130bea0a11dd913d4737f7e222c9b1b3c20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:46:49 np0005549474 podman[113727]: 2025-12-07 09:46:49.254586771 +0000 UTC m=+0.026044451 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:46:49 np0005549474 podman[113727]: 2025-12-07 09:46:49.353046535 +0000 UTC m=+0.124504195 container start e05f6d626fed033b299e9d1cb8ce5130bea0a11dd913d4737f7e222c9b1b3c20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_feistel, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 04:46:49 np0005549474 podman[113727]: 2025-12-07 09:46:49.356706307 +0000 UTC m=+0.128164007 container attach e05f6d626fed033b299e9d1cb8ce5130bea0a11dd913d4737f7e222c9b1b3c20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_feistel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:46:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:49 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]: {
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:    "0": [
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:        {
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            "devices": [
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "/dev/loop3"
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            ],
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            "lv_name": "ceph_lv0",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            "lv_size": "21470642176",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            "name": "ceph_lv0",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            "tags": {
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.cluster_name": "ceph",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.crush_device_class": "",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.encrypted": "0",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.osd_id": "0",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.type": "block",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.vdo": "0",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:                "ceph.with_tpm": "0"
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            },
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            "type": "block",
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:            "vg_name": "ceph_vg0"
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:        }
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]:    ]
Dec  7 04:46:49 np0005549474 sweet_feistel[113743]: }
Dec  7 04:46:49 np0005549474 systemd[1]: libpod-e05f6d626fed033b299e9d1cb8ce5130bea0a11dd913d4737f7e222c9b1b3c20.scope: Deactivated successfully.
Dec  7 04:46:49 np0005549474 podman[113727]: 2025-12-07 09:46:49.65138649 +0000 UTC m=+0.422844250 container died e05f6d626fed033b299e9d1cb8ce5130bea0a11dd913d4737f7e222c9b1b3c20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_feistel, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:46:49 np0005549474 systemd[1]: var-lib-containers-storage-overlay-7635953c55845ab19ece0a7c1e704257a07b62058493ac0256db4470f121118c-merged.mount: Deactivated successfully.
Dec  7 04:46:49 np0005549474 podman[113727]: 2025-12-07 09:46:49.702093012 +0000 UTC m=+0.473550692 container remove e05f6d626fed033b299e9d1cb8ce5130bea0a11dd913d4737f7e222c9b1b3c20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:46:49 np0005549474 systemd[1]: libpod-conmon-e05f6d626fed033b299e9d1cb8ce5130bea0a11dd913d4737f7e222c9b1b3c20.scope: Deactivated successfully.
Dec  7 04:46:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:49] "GET /metrics HTTP/1.1" 200 48246 "" "Prometheus/2.51.0"
Dec  7 04:46:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:49] "GET /metrics HTTP/1.1" 200 48246 "" "Prometheus/2.51.0"
Dec  7 04:46:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  7 04:46:50 np0005549474 podman[113856]: 2025-12-07 09:46:50.251221874 +0000 UTC m=+0.036467419 container create f02050fd598ea876a9f7085f0323df1a9319fa537957dab08deef1d87ae60663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:46:50 np0005549474 systemd[1]: Started libpod-conmon-f02050fd598ea876a9f7085f0323df1a9319fa537957dab08deef1d87ae60663.scope.
Dec  7 04:46:50 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:46:50 np0005549474 podman[113856]: 2025-12-07 09:46:50.320945094 +0000 UTC m=+0.106190679 container init f02050fd598ea876a9f7085f0323df1a9319fa537957dab08deef1d87ae60663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 04:46:50 np0005549474 podman[113856]: 2025-12-07 09:46:50.327780762 +0000 UTC m=+0.113026297 container start f02050fd598ea876a9f7085f0323df1a9319fa537957dab08deef1d87ae60663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 04:46:50 np0005549474 pensive_boyd[113873]: 167 167
Dec  7 04:46:50 np0005549474 podman[113856]: 2025-12-07 09:46:50.331257718 +0000 UTC m=+0.116503283 container attach f02050fd598ea876a9f7085f0323df1a9319fa537957dab08deef1d87ae60663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:46:50 np0005549474 systemd[1]: libpod-f02050fd598ea876a9f7085f0323df1a9319fa537957dab08deef1d87ae60663.scope: Deactivated successfully.
Dec  7 04:46:50 np0005549474 podman[113856]: 2025-12-07 09:46:50.236142327 +0000 UTC m=+0.021387882 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:46:50 np0005549474 podman[113856]: 2025-12-07 09:46:50.332130153 +0000 UTC m=+0.117375698 container died f02050fd598ea876a9f7085f0323df1a9319fa537957dab08deef1d87ae60663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:46:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v124: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 474 B/s rd, 0 op/s; 17 B/s, 0 objects/s recovering
Dec  7 04:46:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec  7 04:46:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  7 04:46:50 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5b8c3b7f4f1a4a50d68f08c7399a3447ff74b70dba5058e565eb5e7944418f58-merged.mount: Deactivated successfully.
Dec  7 04:46:50 np0005549474 podman[113856]: 2025-12-07 09:46:50.366874834 +0000 UTC m=+0.152120369 container remove f02050fd598ea876a9f7085f0323df1a9319fa537957dab08deef1d87ae60663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 04:46:50 np0005549474 systemd[1]: libpod-conmon-f02050fd598ea876a9f7085f0323df1a9319fa537957dab08deef1d87ae60663.scope: Deactivated successfully.
Dec  7 04:46:50 np0005549474 podman[113900]: 2025-12-07 09:46:50.499241546 +0000 UTC m=+0.035384780 container create c6d2309a296c4421f87dd5d0d46fbd60e74a8f94cba1255d1dc87a6a99840244 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 04:46:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:50 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:50 np0005549474 systemd[1]: Started libpod-conmon-c6d2309a296c4421f87dd5d0d46fbd60e74a8f94cba1255d1dc87a6a99840244.scope.
Dec  7 04:46:50 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:46:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadaa34c9c55eea40c7764db911f69c728ac6be8de7ee35082c69de9833bb878/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadaa34c9c55eea40c7764db911f69c728ac6be8de7ee35082c69de9833bb878/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadaa34c9c55eea40c7764db911f69c728ac6be8de7ee35082c69de9833bb878/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadaa34c9c55eea40c7764db911f69c728ac6be8de7ee35082c69de9833bb878/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:46:50 np0005549474 podman[113900]: 2025-12-07 09:46:50.484257831 +0000 UTC m=+0.020401085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:46:50 np0005549474 podman[113900]: 2025-12-07 09:46:50.586407858 +0000 UTC m=+0.122551112 container init c6d2309a296c4421f87dd5d0d46fbd60e74a8f94cba1255d1dc87a6a99840244 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_elbakyan, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  7 04:46:50 np0005549474 podman[113900]: 2025-12-07 09:46:50.596529537 +0000 UTC m=+0.132672761 container start c6d2309a296c4421f87dd5d0d46fbd60e74a8f94cba1255d1dc87a6a99840244 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_elbakyan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 04:46:50 np0005549474 podman[113900]: 2025-12-07 09:46:50.599406027 +0000 UTC m=+0.135549351 container attach c6d2309a296c4421f87dd5d0d46fbd60e74a8f94cba1255d1dc87a6a99840244 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 04:46:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:50 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec  7 04:46:51 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  7 04:46:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  7 04:46:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec  7 04:46:51 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec  7 04:46:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:51.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:51 np0005549474 lvm[113994]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:46:51 np0005549474 lvm[113994]: VG ceph_vg0 finished
Dec  7 04:46:51 np0005549474 angry_elbakyan[113918]: {}
Dec  7 04:46:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:51 np0005549474 systemd[1]: libpod-c6d2309a296c4421f87dd5d0d46fbd60e74a8f94cba1255d1dc87a6a99840244.scope: Deactivated successfully.
Dec  7 04:46:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:46:51 np0005549474 podman[113900]: 2025-12-07 09:46:51.283266867 +0000 UTC m=+0.819410121 container died c6d2309a296c4421f87dd5d0d46fbd60e74a8f94cba1255d1dc87a6a99840244 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_elbakyan, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 04:46:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:51.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:46:51 np0005549474 systemd[1]: libpod-c6d2309a296c4421f87dd5d0d46fbd60e74a8f94cba1255d1dc87a6a99840244.scope: Consumed 1.042s CPU time.
Dec  7 04:46:51 np0005549474 systemd[1]: var-lib-containers-storage-overlay-aadaa34c9c55eea40c7764db911f69c728ac6be8de7ee35082c69de9833bb878-merged.mount: Deactivated successfully.
Dec  7 04:46:51 np0005549474 podman[113900]: 2025-12-07 09:46:51.326866463 +0000 UTC m=+0.863009697 container remove c6d2309a296c4421f87dd5d0d46fbd60e74a8f94cba1255d1dc87a6a99840244 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  7 04:46:51 np0005549474 systemd[1]: libpod-conmon-c6d2309a296c4421f87dd5d0d46fbd60e74a8f94cba1255d1dc87a6a99840244.scope: Deactivated successfully.
Dec  7 04:46:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:51 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:46:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:46:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:51 np0005549474 systemd-logind[796]: New session 40 of user zuul.
Dec  7 04:46:51 np0005549474 systemd[1]: Started Session 40 of User zuul.
Dec  7 04:46:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec  7 04:46:52 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  7 04:46:52 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:52 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:46:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec  7 04:46:52 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec  7 04:46:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v127: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 587 B/s rd, 0 op/s
Dec  7 04:46:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 04:46:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:46:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:52 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:46:52 np0005549474 python3.9[114187]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:46:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:52 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec  7 04:46:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:53.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:53 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 04:46:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:46:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec  7 04:46:53 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec  7 04:46:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:53.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:53 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Dec  7 04:46:54 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 04:46:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Dec  7 04:46:54 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec  7 04:46:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v130: 337 pgs: 1 unknown, 1 active+remapped, 335 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec  7 04:46:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:54 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:54 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:54 np0005549474 python3.9[114345]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  7 04:46:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:55.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Dec  7 04:46:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:55.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:55 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:55 np0005549474 python3.9[114500]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:46:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v131: 337 pgs: 1 unknown, 1 active+remapped, 335 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  7 04:46:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:56 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Dec  7 04:46:56 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Dec  7 04:46:56 np0005549474 python3.9[114609]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  7 04:46:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:56 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba40028d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:46:56.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:46:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:46:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:57.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:46:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:57.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:57 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:46:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:46:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Dec  7 04:46:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Dec  7 04:46:57 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Dec  7 04:46:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:46:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v134: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  7 04:46:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:58 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Dec  7 04:46:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Dec  7 04:46:58 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Dec  7 04:46:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:58 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:59 np0005549474 python3.9[114766]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:46:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:59.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:46:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:46:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:59.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:46:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:46:59 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba40028d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:46:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:59] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  7 04:46:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:46:59] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  7 04:47:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec  7 04:47:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:00 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba40028d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:00 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:01.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:01.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:01 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:01 np0005549474 python3.9[114922]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  7 04:47:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec  7 04:47:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:02 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba40028d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:02 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:03.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:03.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:03 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b880010d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:03 np0005549474 python3.9[115079]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:47:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 393 B/s rd, 0 op/s; 14 B/s, 0 objects/s recovering
Dec  7 04:47:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:04 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:04 np0005549474 python3.9[115232]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  7 04:47:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:04 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba40028d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:05.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:05.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:05 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:05 np0005549474 python3.9[115384]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:47:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 350 B/s rd, 0 op/s; 12 B/s, 0 objects/s recovering
Dec  7 04:47:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:06 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b88001e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:06 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:47:06.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:47:07 np0005549474 python3.9[115543]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:47:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:07.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:07.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:07 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Dec  7 04:47:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:08 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:08 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:09.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:09.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:09 np0005549474 python3.9[115699]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:47:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:09 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:09] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  7 04:47:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:09] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  7 04:47:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 523 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec  7 04:47:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:10 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:10 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:11 np0005549474 python3.9[115989]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  7 04:47:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:11.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:11.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:11 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b88001e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:12 np0005549474 python3.9[116140]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:47:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:47:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:47:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:47:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:47:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:47:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:47:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:47:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:47:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:47:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:12 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:12 np0005549474 python3.9[116294]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:47:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:12 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:13.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:13.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:13 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:47:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:14 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:14 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b84000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:14 np0005549474 python3.9[116450]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:47:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:15.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:15.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:15 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b88001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:47:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:16 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:16 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:47:16.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:47:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:47:16.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:47:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:47:16.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:47:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:17.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:17 np0005549474 python3.9[116631]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:47:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:17.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:17 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:18 np0005549474 python3.9[116786]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec  7 04:47:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:47:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:18 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b88002180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:18 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:19.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:19.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:19 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:19] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec  7 04:47:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:19] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec  7 04:47:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:47:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:20 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:20 np0005549474 systemd[1]: session-40.scope: Deactivated successfully.
Dec  7 04:47:20 np0005549474 systemd[1]: session-40.scope: Consumed 17.154s CPU time.
Dec  7 04:47:20 np0005549474 systemd-logind[796]: Session 40 logged out. Waiting for processes to exit.
Dec  7 04:47:20 np0005549474 systemd-logind[796]: Removed session 40.
Dec  7 04:47:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:20 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b880032a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:21.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:21.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:21 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:47:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:22 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:22 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2ba4004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:23.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:23.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:23 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b880032a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:47:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:24 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:24 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:25.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:25 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b940040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:25 np0005549474 systemd-logind[796]: New session 41 of user zuul.
Dec  7 04:47:25 np0005549474 systemd[1]: Started Session 41 of User zuul.
Dec  7 04:47:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:47:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:26 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2bac001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:26 np0005549474 python3.9[116975]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:47:26 np0005549474 kernel: ganesha.nfsd[108346]: segfault at 50 ip 00007f2c5fc0d32e sp 00007f2c17ffe210 error 4 in libntirpc.so.5.8[7f2c5fbf2000+2c000] likely on CPU 2 (core 0, socket 2)
Dec  7 04:47:26 np0005549474 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  7 04:47:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[108210]: 07/12/2025 09:47:26 : epoch 69354cc2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2b8c003e20 fd 38 proxy ignored for local
Dec  7 04:47:26 np0005549474 systemd[1]: Started Process Core Dump (PID 116981/UID 0).
Dec  7 04:47:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:47:26.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:47:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:27.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:27.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/094727 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:47:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:47:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:47:27 np0005549474 python3.9[117133]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:47:28 np0005549474 systemd-coredump[116982]: Process 108215 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 53:#012#0  0x00007f2c5fc0d32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  7 04:47:28 np0005549474 systemd[1]: systemd-coredump@1-116981-0.service: Deactivated successfully.
Dec  7 04:47:28 np0005549474 systemd[1]: systemd-coredump@1-116981-0.service: Consumed 1.186s CPU time.
Dec  7 04:47:28 np0005549474 podman[117181]: 2025-12-07 09:47:28.198409845 +0000 UTC m=+0.025493113 container died a55418e417c7a6530d3327ad5a84642c87dba2409a9d1f66465b74e6f85dc7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 04:47:28 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d63653c0a3e7cd94ebf74554bf7d979614d69d71f2ac5e7280f38fada6cca255-merged.mount: Deactivated successfully.
Dec  7 04:47:28 np0005549474 podman[117181]: 2025-12-07 09:47:28.280602045 +0000 UTC m=+0.107685303 container remove a55418e417c7a6530d3327ad5a84642c87dba2409a9d1f66465b74e6f85dc7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 04:47:28 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Main process exited, code=exited, status=139/n/a
Dec  7 04:47:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:47:28 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Failed with result 'exit-code'.
Dec  7 04:47:28 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.629s CPU time.
Dec  7 04:47:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:29.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:29.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:29 np0005549474 python3.9[117375]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:47:29 np0005549474 systemd-logind[796]: Session 41 logged out. Waiting for processes to exit.
Dec  7 04:47:29 np0005549474 systemd[1]: session-41.scope: Deactivated successfully.
Dec  7 04:47:29 np0005549474 systemd[1]: session-41.scope: Consumed 2.249s CPU time.
Dec  7 04:47:29 np0005549474 systemd-logind[796]: Removed session 41.
Dec  7 04:47:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:29] "GET /metrics HTTP/1.1" 200 48181 "" "Prometheus/2.51.0"
Dec  7 04:47:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:29] "GET /metrics HTTP/1.1" 200 48181 "" "Prometheus/2.51.0"
Dec  7 04:47:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:47:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:31.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:47:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/094732 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:47:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:33.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:47:34 np0005549474 systemd-logind[796]: New session 42 of user zuul.
Dec  7 04:47:35 np0005549474 systemd[1]: Started Session 42 of User zuul.
Dec  7 04:47:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:47:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:35.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:47:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:35.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:36 np0005549474 python3.9[117560]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:47:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:47:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:47:36.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:47:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:37.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:37 np0005549474 python3.9[117740]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:47:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:37.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:38 np0005549474 python3.9[117897]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:47:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Dec  7 04:47:38 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Scheduled restart job, restart counter is at 2.
Dec  7 04:47:38 np0005549474 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:47:38 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.629s CPU time.
Dec  7 04:47:38 np0005549474 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:47:38 np0005549474 podman[117975]: 2025-12-07 09:47:38.845242116 +0000 UTC m=+0.039323138 container create 7a7c700412f9120e1d9a345cfd362d0242d02fda114be02a9b7c3782deef35b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:47:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3891aa34f62b012f42f15ff3af48657b0af02694cd4922a6f5a17d6a4d406/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 04:47:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3891aa34f62b012f42f15ff3af48657b0af02694cd4922a6f5a17d6a4d406/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:47:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3891aa34f62b012f42f15ff3af48657b0af02694cd4922a6f5a17d6a4d406/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:47:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3891aa34f62b012f42f15ff3af48657b0af02694cd4922a6f5a17d6a4d406/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bjrqrk-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:47:38 np0005549474 podman[117975]: 2025-12-07 09:47:38.826179379 +0000 UTC m=+0.020260421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:47:39 np0005549474 podman[117975]: 2025-12-07 09:47:39.022919318 +0000 UTC m=+0.217000390 container init 7a7c700412f9120e1d9a345cfd362d0242d02fda114be02a9b7c3782deef35b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 04:47:39 np0005549474 podman[117975]: 2025-12-07 09:47:39.029384644 +0000 UTC m=+0.223465676 container start 7a7c700412f9120e1d9a345cfd362d0242d02fda114be02a9b7c3782deef35b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:47:39 np0005549474 bash[117975]: 7a7c700412f9120e1d9a345cfd362d0242d02fda114be02a9b7c3782deef35b9
Dec  7 04:47:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 04:47:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 04:47:39 np0005549474 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:47:39 np0005549474 python3.9[118045]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:47:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:39.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 04:47:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 04:47:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 04:47:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 04:47:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 04:47:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:39.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:47:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:39] "GET /metrics HTTP/1.1" 200 48181 "" "Prometheus/2.51.0"
Dec  7 04:47:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:39] "GET /metrics HTTP/1.1" 200 48181 "" "Prometheus/2.51.0"
Dec  7 04:47:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 426 B/s wr, 1 op/s
Dec  7 04:47:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:41.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:41.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:41 np0005549474 python3.9[118239]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:47:42
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['.nfs', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'volumes']
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Dec  7 04:47:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:47:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:47:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:47:42 np0005549474 python3.9[118435]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:47:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:43.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:43.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:43 np0005549474 python3.9[118589]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:47:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec  7 04:47:44 np0005549474 python3.9[118754]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:47:45 np0005549474 python3.9[118833]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:47:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:47:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:45.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:47:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:45 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec  7 04:47:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:45 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  7 04:47:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:45 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:47:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:45 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:47:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:45 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 04:47:45 np0005549474 python3.9[118986]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:47:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:47:46 np0005549474 python3.9[119064]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:47:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:47:46.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:47:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:47.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.005000135s ======
Dec  7 04:47:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:47.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000135s
Dec  7 04:47:47 np0005549474 python3.9[119218]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:47:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:48 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:47:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:48 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:47:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:48 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:47:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:47:48 np0005549474 python3.9[119370]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:47:49 np0005549474 python3.9[119523]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:47:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:47:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:49.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:47:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/094749 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:47:49 np0005549474 python3.9[119676]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:47:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:49] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  7 04:47:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:49] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  7 04:47:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Dec  7 04:47:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:51.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:52 np0005549474 python3.9[119894]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:47:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec  7 04:47:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:47:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:47:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:47:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:47:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:47:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:53.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:53.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000006:nfs.cephfs.2: -2
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  7 04:47:54 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  7 04:47:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
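
[annotation] Despite the NFS SERVER INITIALIZED banner just above, the CRIT lines show that ganesha's DBUS integration never came up (no /run/dbus/system_bus_socket inside the container, so the dbus service thread exits) and the Kerberos machine-credential refresh failed for a similar reason (no usable entry in /etc/krb5.keytab). A minimal pre-flight check for those two prerequisites, assuming it is run inside the nfs container; the paths are copied from the messages above and the script itself is a hypothetical diagnostic, not part of cephadm:

    # Sketch: verify the two prerequisites the ganesha startup log complains
    # about. Paths come straight from the CRIT/WARN lines above.
    import os

    checks = {
        "dbus system bus socket": "/run/dbus/system_bus_socket",
        "kerberos keytab":        "/etc/krb5.keytab",
    }
    for label, path in checks.items():
        state = "present" if os.path.exists(path) else "MISSING"
        print(f"{label:24s} {path}: {state}")
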
Dec  7 04:47:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:47:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:47:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:47:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:47:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:47:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:47:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:47:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:47:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
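
[annotation] The mon_command payloads the cephadm mgr dispatches above map one-to-one onto ceph CLI prefixes, which is the easiest way to replay them by hand. A sketch using subprocess; the command strings mirror the logged "prefix"/args, and having a working ceph.conf plus client.admin keyring on the host is an assumption:

    # Sketch: replay the mon commands the mgr dispatched above via the ceph
    # CLI. Requires ceph.conf + client.admin keyring on the host (assumed).
    import json, subprocess

    def ceph(*args):
        return subprocess.run(("ceph",) + args, check=True,
                              capture_output=True, text=True).stdout

    print(ceph("config", "generate-minimal-conf"))
    print(ceph("auth", "get", "client.admin"))
    print(ceph("auth", "get", "client.bootstrap-osd"))
    # {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
    tree = json.loads(ceph("osd", "tree", "destroyed", "--format", "json"))
    print(len(tree.get("nodes", [])), "nodes in the destroyed-only tree")
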
Dec  7 04:47:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:55 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd854000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:55.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:55.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:55 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:55 np0005549474 podman[120174]: 2025-12-07 09:47:55.39497677 +0000 UTC m=+0.023739414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:47:55 np0005549474 python3.9[120135]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:47:55 np0005549474 podman[120174]: 2025-12-07 09:47:55.600119977 +0000 UTC m=+0.228882601 container create 0c36f43fb1bfff4490204a8ccad2061a99f3cc9615e2e473f73d5248b59f7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_black, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:47:55 np0005549474 systemd[92462]: Created slice User Background Tasks Slice.
Dec  7 04:47:55 np0005549474 systemd[92462]: Starting Cleanup of User's Temporary Files and Directories...
Dec  7 04:47:55 np0005549474 systemd[92462]: Finished Cleanup of User's Temporary Files and Directories.
Dec  7 04:47:55 np0005549474 systemd[1]: Started libpod-conmon-0c36f43fb1bfff4490204a8ccad2061a99f3cc9615e2e473f73d5248b59f7c7b.scope.
Dec  7 04:47:55 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:47:56 np0005549474 python3.9[120347]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:47:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  7 04:47:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:56 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:47:56.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:47:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:47:56.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
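
[annotation] Both ceph-dashboard webhook receivers are failing: compute-2 with a dial timeout and compute-1 with a context deadline, and the same pair recurs ten seconds later. A quick reachability probe for the two endpoints, assuming python3-requests is available on the host; the URLs are copied verbatim from the error messages and the 5-second timeout is an arbitrary choice:

    # Sketch: probe the alertmanager webhook receivers that are timing out
    # above. URLs come from the log; the timeout value is an assumption.
    import requests

    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        try:
            r = requests.post(url, json={}, timeout=5)
            print(url, "->", r.status_code)
        except requests.RequestException as exc:
            print(url, "->", exc)
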
Dec  7 04:47:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:57 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:47:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:57.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:47:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 04:47:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:57.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  7 04:47:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:57 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:47:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:47:58 np0005549474 python3.9[120526]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:47:58 np0005549474 podman[120174]: 2025-12-07 09:47:58.239024067 +0000 UTC m=+2.867786721 container init 0c36f43fb1bfff4490204a8ccad2061a99f3cc9615e2e473f73d5248b59f7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_black, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 04:47:58 np0005549474 podman[120174]: 2025-12-07 09:47:58.24869805 +0000 UTC m=+2.877460674 container start 0c36f43fb1bfff4490204a8ccad2061a99f3cc9615e2e473f73d5248b59f7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_black, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:47:58 np0005549474 systemd[1]: libpod-0c36f43fb1bfff4490204a8ccad2061a99f3cc9615e2e473f73d5248b59f7c7b.scope: Deactivated successfully.
Dec  7 04:47:58 np0005549474 beautiful_black[120217]: 167 167
Dec  7 04:47:58 np0005549474 conmon[120217]: conmon 0c36f43fb1bfff449020 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c36f43fb1bfff4490204a8ccad2061a99f3cc9615e2e473f73d5248b59f7c7b.scope/container/memory.events
Dec  7 04:47:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:47:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Dec  7 04:47:58 np0005549474 podman[120174]: 2025-12-07 09:47:58.508615592 +0000 UTC m=+3.137378236 container attach 0c36f43fb1bfff4490204a8ccad2061a99f3cc9615e2e473f73d5248b59f7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_black, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:47:58 np0005549474 podman[120174]: 2025-12-07 09:47:58.50922873 +0000 UTC m=+3.137991384 container died 0c36f43fb1bfff4490204a8ccad2061a99f3cc9615e2e473f73d5248b59f7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  7 04:47:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/094758 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:47:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:58 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
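
[annotation] The recurring svc_vc_recv "proxy header rest len failed" events appear to line up with the haproxy Layer4 check that reports backend/nfs.cephfs.2 UP just above: the ingress health check opens a bare TCP connection to the ganesha port and closes it without completing a PROXY-protocol/RPC exchange, which ganesha logs as a dead transport (the bare "%" looks like an ntirpc format-string quirk rather than missing data). A sketch that should provoke the same log line, assuming ganesha listens locally on the standard NFS port 2049:

    # Sketch: mimic haproxy's Layer4 health check (TCP connect + close).
    # Port 2049 is an assumption; each run should yield one svc_vc_recv
    # event like those above.
    import socket

    socket.create_connection(("127.0.0.1", 2049), timeout=1).close()
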
Dec  7 04:47:59 np0005549474 python3.9[120692]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:47:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:59 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:47:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:59.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:47:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:47:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:47:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:59.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:47:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:47:59 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:47:59 np0005549474 systemd[1]: var-lib-containers-storage-overlay-adefe7967969be1083d02974144a6df33101cf34c86c1e8d1b1443b76afcc240-merged.mount: Deactivated successfully.
Dec  7 04:47:59 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:47:59 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:47:59 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:47:59 np0005549474 podman[120174]: 2025-12-07 09:47:59.821151737 +0000 UTC m=+4.449914361 container remove 0c36f43fb1bfff4490204a8ccad2061a99f3cc9615e2e473f73d5248b59f7c7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:47:59 np0005549474 systemd[1]: libpod-conmon-0c36f43fb1bfff4490204a8ccad2061a99f3cc9615e2e473f73d5248b59f7c7b.scope: Deactivated successfully.
Dec  7 04:47:59 np0005549474 podman[120855]: 2025-12-07 09:47:59.984706827 +0000 UTC m=+0.066429290 container create 682309e3cb155ecbbc32ec742dd1e231066daa26f9a5250ae3d1ab5ca140dc57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:47:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:59] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  7 04:47:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:47:59] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  7 04:48:00 np0005549474 podman[120855]: 2025-12-07 09:47:59.941746074 +0000 UTC m=+0.023468557 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:48:00 np0005549474 python3.9[120849]: ansible-service_facts Invoked
Dec  7 04:48:00 np0005549474 network[120886]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  7 04:48:00 np0005549474 network[120887]: 'network-scripts' will be removed from distribution in near future.
Dec  7 04:48:00 np0005549474 network[120888]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  7 04:48:00 np0005549474 systemd[1]: Started libpod-conmon-682309e3cb155ecbbc32ec742dd1e231066daa26f9a5250ae3d1ab5ca140dc57.scope.
Dec  7 04:48:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Dec  7 04:48:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:00 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:00 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:48:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1343bd194112fe803537372c2048114b83571219adb115fa3a8e6e0d34921d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1343bd194112fe803537372c2048114b83571219adb115fa3a8e6e0d34921d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1343bd194112fe803537372c2048114b83571219adb115fa3a8e6e0d34921d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1343bd194112fe803537372c2048114b83571219adb115fa3a8e6e0d34921d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1343bd194112fe803537372c2048114b83571219adb115fa3a8e6e0d34921d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:00 np0005549474 podman[120855]: 2025-12-07 09:48:00.904126626 +0000 UTC m=+0.985849159 container init 682309e3cb155ecbbc32ec742dd1e231066daa26f9a5250ae3d1ab5ca140dc57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 04:48:00 np0005549474 podman[120855]: 2025-12-07 09:48:00.917291852 +0000 UTC m=+0.999014315 container start 682309e3cb155ecbbc32ec742dd1e231066daa26f9a5250ae3d1ab5ca140dc57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 04:48:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:01 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:01.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:01 np0005549474 festive_hawking[120896]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:48:01 np0005549474 festive_hawking[120896]: --> All data devices are unavailable
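
[annotation] The short-lived beautiful_black / festive_hawking / zen_hamilton containers are cephadm running one-shot ceph-volume probes from the pinned ceph image; festive_hawking's output above shows the scan saw one LVM-backed data device and judged none usable for new OSDs. A comparable manual scan follows as a sketch: the image digest is copied from the log, `ceph-volume inventory` is a real subcommand, but the podman flags and the stability of the JSON fields are assumptions:

    # Sketch: one-shot device inventory with the same pinned ceph image the
    # log shows cephadm pulling. Flags approximate a manual probe; cephadm's
    # own invocation differs.
    import json, subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
         image, "ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    for dev in json.loads(out):
        print(dev["path"], "available:", dev["available"])
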
Dec  7 04:48:01 np0005549474 systemd[1]: libpod-682309e3cb155ecbbc32ec742dd1e231066daa26f9a5250ae3d1ab5ca140dc57.scope: Deactivated successfully.
Dec  7 04:48:01 np0005549474 podman[120855]: 2025-12-07 09:48:01.387145491 +0000 UTC m=+1.468867954 container attach 682309e3cb155ecbbc32ec742dd1e231066daa26f9a5250ae3d1ab5ca140dc57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:48:01 np0005549474 podman[120855]: 2025-12-07 09:48:01.388543389 +0000 UTC m=+1.470265852 container died 682309e3cb155ecbbc32ec742dd1e231066daa26f9a5250ae3d1ab5ca140dc57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 04:48:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:01.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:01 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828001040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:01 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d1343bd194112fe803537372c2048114b83571219adb115fa3a8e6e0d34921d1-merged.mount: Deactivated successfully.
Dec  7 04:48:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec  7 04:48:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:02 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:03 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8240016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:03.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:48:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:03.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:03 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:04 np0005549474 podman[120855]: 2025-12-07 09:48:04.061745961 +0000 UTC m=+4.143468434 container remove 682309e3cb155ecbbc32ec742dd1e231066daa26f9a5250ae3d1ab5ca140dc57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:48:04 np0005549474 systemd[1]: libpod-conmon-682309e3cb155ecbbc32ec742dd1e231066daa26f9a5250ae3d1ab5ca140dc57.scope: Deactivated successfully.
Dec  7 04:48:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec  7 04:48:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:04 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:04 np0005549474 podman[121144]: 2025-12-07 09:48:04.688252983 +0000 UTC m=+0.039377268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:48:05 np0005549474 podman[121144]: 2025-12-07 09:48:05.001994632 +0000 UTC m=+0.353118827 container create f17406823411551773b934b5b3059b9237939e0ca840b1ad59076b45bdd7014f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 04:48:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:05 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:05 np0005549474 systemd[1]: Started libpod-conmon-f17406823411551773b934b5b3059b9237939e0ca840b1ad59076b45bdd7014f.scope.
Dec  7 04:48:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:05.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:05 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:48:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:05.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:05 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828001b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:06 np0005549474 python3.9[121482]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
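
[annotation] The ansible-ansible.legacy.dnf invocations in this window (openssh-server earlier, chrony here) reduce to idempotent "ensure package present" transactions. For reference, the same ensure-present step expressed through the dnf Python API the module wraps; a minimal sketch assuming the python3-dnf bindings are installed, not the module's own code path verbatim:

    # Sketch: "state=present" for chrony via the dnf Python API.
    # Mirrors what the logged module call ensures; error handling omitted.
    import dnf

    base = dnf.Base()
    base.read_all_repos()
    base.fill_sack()
    if not base.sack.query().installed().filter(name="chrony"):
        base.install("chrony")
        base.resolve()
        base.download_packages(base.transaction.install_set)
        base.do_transaction()
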
Dec  7 04:48:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s
Dec  7 04:48:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:06 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:06 np0005549474 podman[121144]: 2025-12-07 09:48:06.828885536 +0000 UTC m=+2.180009811 container init f17406823411551773b934b5b3059b9237939e0ca840b1ad59076b45bdd7014f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hamilton, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:48:06 np0005549474 podman[121144]: 2025-12-07 09:48:06.838025654 +0000 UTC m=+2.189149839 container start f17406823411551773b934b5b3059b9237939e0ca840b1ad59076b45bdd7014f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hamilton, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:48:06 np0005549474 zen_hamilton[121328]: 167 167
Dec  7 04:48:06 np0005549474 systemd[1]: libpod-f17406823411551773b934b5b3059b9237939e0ca840b1ad59076b45bdd7014f.scope: Deactivated successfully.
Dec  7 04:48:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:06.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:48:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:06.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:48:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:06.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:48:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:07 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:07.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:07 np0005549474 podman[121144]: 2025-12-07 09:48:07.351793002 +0000 UTC m=+2.702917197 container attach f17406823411551773b934b5b3059b9237939e0ca840b1ad59076b45bdd7014f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  7 04:48:07 np0005549474 podman[121144]: 2025-12-07 09:48:07.352914393 +0000 UTC m=+2.704038608 container died f17406823411551773b934b5b3059b9237939e0ca840b1ad59076b45bdd7014f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hamilton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 04:48:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:07.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:07 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:48:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:48:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:08 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828001b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:09 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:09.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:09.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:09 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:09 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4f00c6bd1c91d495139c748d78992610094f2252047ac9f915f28554a4716c00-merged.mount: Deactivated successfully.
Dec  7 04:48:09 np0005549474 python3.9[121653]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  7 04:48:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:09] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  7 04:48:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:09] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  7 04:48:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:48:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:10 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:10 np0005549474 podman[121144]: 2025-12-07 09:48:10.813757561 +0000 UTC m=+6.164881756 container remove f17406823411551773b934b5b3059b9237939e0ca840b1ad59076b45bdd7014f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hamilton, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 04:48:10 np0005549474 systemd[1]: libpod-conmon-f17406823411551773b934b5b3059b9237939e0ca840b1ad59076b45bdd7014f.scope: Deactivated successfully.
Dec  7 04:48:11 np0005549474 podman[121739]: 2025-12-07 09:48:11.03889565 +0000 UTC m=+0.084221003 container create 6d7482a183575a67400bba5d076f2fd3162bb398de4c0c3ae563468fa129cf55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:48:11 np0005549474 podman[121739]: 2025-12-07 09:48:10.978892425 +0000 UTC m=+0.024217748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:48:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:11 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828001b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:11 np0005549474 systemd[1]: Started libpod-conmon-6d7482a183575a67400bba5d076f2fd3162bb398de4c0c3ae563468fa129cf55.scope.
Dec  7 04:48:11 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:48:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cec1cf7b8f95e2a5db9c8291a32614ae714cbfa05f383ece28c7f52a8d52f29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cec1cf7b8f95e2a5db9c8291a32614ae714cbfa05f383ece28c7f52a8d52f29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cec1cf7b8f95e2a5db9c8291a32614ae714cbfa05f383ece28c7f52a8d52f29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cec1cf7b8f95e2a5db9c8291a32614ae714cbfa05f383ece28c7f52a8d52f29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:11 np0005549474 podman[121739]: 2025-12-07 09:48:11.158942642 +0000 UTC m=+0.204267975 container init 6d7482a183575a67400bba5d076f2fd3162bb398de4c0c3ae563468fa129cf55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:48:11 np0005549474 podman[121739]: 2025-12-07 09:48:11.168938834 +0000 UTC m=+0.214264137 container start 6d7482a183575a67400bba5d076f2fd3162bb398de4c0c3ae563468fa129cf55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:48:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:11.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:11 np0005549474 python3.9[121836]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:48:11 np0005549474 podman[121739]: 2025-12-07 09:48:11.374381849 +0000 UTC m=+0.419707212 container attach 6d7482a183575a67400bba5d076f2fd3162bb398de4c0c3ae563468fa129cf55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:48:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:11.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]: {
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:    "0": [
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:        {
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            "devices": [
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "/dev/loop3"
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            ],
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            "lv_name": "ceph_lv0",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            "lv_size": "21470642176",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            "name": "ceph_lv0",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            "tags": {
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.cluster_name": "ceph",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.crush_device_class": "",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.encrypted": "0",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.osd_id": "0",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.type": "block",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.vdo": "0",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:                "ceph.with_tpm": "0"
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            },
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            "type": "block",
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:            "vg_name": "ceph_vg0"
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:        }
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]:    ]
Dec  7 04:48:11 np0005549474 objective_sanderson[121802]: }
Dec  7 04:48:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:11 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:11 np0005549474 systemd[1]: libpod-6d7482a183575a67400bba5d076f2fd3162bb398de4c0c3ae563468fa129cf55.scope: Deactivated successfully.
Dec  7 04:48:11 np0005549474 podman[121739]: 2025-12-07 09:48:11.457733687 +0000 UTC m=+0.503058990 container died 6d7482a183575a67400bba5d076f2fd3162bb398de4c0c3ae563468fa129cf55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:48:11 np0005549474 python3.9[121932]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:11 np0005549474 systemd[1]: var-lib-containers-storage-overlay-7cec1cf7b8f95e2a5db9c8291a32614ae714cbfa05f383ece28c7f52a8d52f29-merged.mount: Deactivated successfully.
Dec  7 04:48:12 np0005549474 podman[121739]: 2025-12-07 09:48:12.327964614 +0000 UTC m=+1.373289947 container remove 6d7482a183575a67400bba5d076f2fd3162bb398de4c0c3ae563468fa129cf55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:48:12 np0005549474 systemd[1]: libpod-conmon-6d7482a183575a67400bba5d076f2fd3162bb398de4c0c3ae563468fa129cf55.scope: Deactivated successfully.
Dec  7 04:48:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:48:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:48:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:48:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:48:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:48:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:48:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:48:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:48:12 np0005549474 python3.9[122084]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:48:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:12 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:12 np0005549474 podman[122255]: 2025-12-07 09:48:12.91880144 +0000 UTC m=+0.050105178 container create 80e53bf85753f09fc43eed8b1955be6abfee1592e7baa93005622d391694e474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_herschel, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 04:48:12 np0005549474 podman[122255]: 2025-12-07 09:48:12.893964028 +0000 UTC m=+0.025267796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:48:13 np0005549474 systemd[1]: Started libpod-conmon-80e53bf85753f09fc43eed8b1955be6abfee1592e7baa93005622d391694e474.scope.
Dec  7 04:48:13 np0005549474 python3.9[122254]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:13 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:48:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:13 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:13 np0005549474 podman[122255]: 2025-12-07 09:48:13.174926009 +0000 UTC m=+0.306229777 container init 80e53bf85753f09fc43eed8b1955be6abfee1592e7baa93005622d391694e474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 04:48:13 np0005549474 podman[122255]: 2025-12-07 09:48:13.185349772 +0000 UTC m=+0.316653510 container start 80e53bf85753f09fc43eed8b1955be6abfee1592e7baa93005622d391694e474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 04:48:13 np0005549474 fervent_herschel[122271]: 167 167
Dec  7 04:48:13 np0005549474 systemd[1]: libpod-80e53bf85753f09fc43eed8b1955be6abfee1592e7baa93005622d391694e474.scope: Deactivated successfully.
Dec  7 04:48:13 np0005549474 conmon[122271]: conmon 80e53bf85753f09fc43e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80e53bf85753f09fc43eed8b1955be6abfee1592e7baa93005622d391694e474.scope/container/memory.events
Dec  7 04:48:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:13.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:13 np0005549474 podman[122255]: 2025-12-07 09:48:13.260291593 +0000 UTC m=+0.391595331 container attach 80e53bf85753f09fc43eed8b1955be6abfee1592e7baa93005622d391694e474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:48:13 np0005549474 podman[122255]: 2025-12-07 09:48:13.261150766 +0000 UTC m=+0.392454514 container died 80e53bf85753f09fc43eed8b1955be6abfee1592e7baa93005622d391694e474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_herschel, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 04:48:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:48:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:13.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:13 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828002ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:13 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ec4c8f2604a55d4c2c54ab7ccd23ed575e7fc9d818b9960726e7d0e33c7c496a-merged.mount: Deactivated successfully.
Dec  7 04:48:13 np0005549474 podman[122255]: 2025-12-07 09:48:13.523510733 +0000 UTC m=+0.654814471 container remove 80e53bf85753f09fc43eed8b1955be6abfee1592e7baa93005622d391694e474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_herschel, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  7 04:48:13 np0005549474 systemd[1]: libpod-conmon-80e53bf85753f09fc43eed8b1955be6abfee1592e7baa93005622d391694e474.scope: Deactivated successfully.
Dec  7 04:48:13 np0005549474 podman[122323]: 2025-12-07 09:48:13.657901634 +0000 UTC m=+0.025956013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:48:13 np0005549474 podman[122323]: 2025-12-07 09:48:13.780139276 +0000 UTC m=+0.148193645 container create 39db91fcfc18e0ff241d974d30564a4fce236f90cde28c7102ed88993e777ab2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_gould, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 04:48:13 np0005549474 systemd[1]: Started libpod-conmon-39db91fcfc18e0ff241d974d30564a4fce236f90cde28c7102ed88993e777ab2.scope.
Dec  7 04:48:13 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:48:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1f970b017b349123f6dbc572f023d8f5e798bae849277537957420e2bffb3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1f970b017b349123f6dbc572f023d8f5e798bae849277537957420e2bffb3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1f970b017b349123f6dbc572f023d8f5e798bae849277537957420e2bffb3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1f970b017b349123f6dbc572f023d8f5e798bae849277537957420e2bffb3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:48:14 np0005549474 podman[122323]: 2025-12-07 09:48:14.156279807 +0000 UTC m=+0.524334266 container init 39db91fcfc18e0ff241d974d30564a4fce236f90cde28c7102ed88993e777ab2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:48:14 np0005549474 podman[122323]: 2025-12-07 09:48:14.165331702 +0000 UTC m=+0.533386071 container start 39db91fcfc18e0ff241d974d30564a4fce236f90cde28c7102ed88993e777ab2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_gould, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 04:48:14 np0005549474 podman[122323]: 2025-12-07 09:48:14.169863825 +0000 UTC m=+0.537918224 container attach 39db91fcfc18e0ff241d974d30564a4fce236f90cde28c7102ed88993e777ab2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_gould, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:48:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:14 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:14 np0005549474 lvm[122468]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:48:14 np0005549474 lvm[122468]: VG ceph_vg0 finished
Dec  7 04:48:14 np0005549474 festive_gould[122339]: {}
Dec  7 04:48:14 np0005549474 systemd[1]: libpod-39db91fcfc18e0ff241d974d30564a4fce236f90cde28c7102ed88993e777ab2.scope: Deactivated successfully.
Dec  7 04:48:14 np0005549474 podman[122323]: 2025-12-07 09:48:14.886725566 +0000 UTC m=+1.254779955 container died 39db91fcfc18e0ff241d974d30564a4fce236f90cde28c7102ed88993e777ab2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_gould, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:48:14 np0005549474 systemd[1]: libpod-39db91fcfc18e0ff241d974d30564a4fce236f90cde28c7102ed88993e777ab2.scope: Consumed 1.110s CPU time.
Dec  7 04:48:14 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ca1f970b017b349123f6dbc572f023d8f5e798bae849277537957420e2bffb3c-merged.mount: Deactivated successfully.
Dec  7 04:48:14 np0005549474 podman[122323]: 2025-12-07 09:48:14.968535472 +0000 UTC m=+1.336589841 container remove 39db91fcfc18e0ff241d974d30564a4fce236f90cde28c7102ed88993e777ab2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_gould, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 04:48:15 np0005549474 systemd[1]: libpod-conmon-39db91fcfc18e0ff241d974d30564a4fce236f90cde28c7102ed88993e777ab2.scope: Deactivated successfully.
Dec  7 04:48:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:48:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:48:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:48:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:48:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:15 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:15 np0005549474 python3.9[122559]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:48:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:15.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:48:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:15.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:15 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c0031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:48:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:16 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828002ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:16.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:48:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:16.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:48:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:17 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:17.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:17.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:17 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:17 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:48:17 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:48:18 np0005549474 python3.9[122762]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:48:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:48:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:18 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c0031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:18 np0005549474 python3.9[122848]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:48:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:19 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828002ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:19.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:19.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:19 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:19 np0005549474 systemd[1]: session-42.scope: Deactivated successfully.
Dec  7 04:48:19 np0005549474 systemd[1]: session-42.scope: Consumed 22.861s CPU time.
Dec  7 04:48:19 np0005549474 systemd-logind[796]: Session 42 logged out. Waiting for processes to exit.
Dec  7 04:48:19 np0005549474 systemd-logind[796]: Removed session 42.
Dec  7 04:48:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:19] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  7 04:48:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:19] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  7 04:48:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:48:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:20 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:21 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c0031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:21.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:21 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828002ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:22 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:23 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:23.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:23 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:48:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:24 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:25 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:25.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:25.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:25 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:25 np0005549474 systemd-logind[796]: New session 43 of user zuul.
Dec  7 04:48:25 np0005549474 systemd[1]: Started Session 43 of User zuul.
Dec  7 04:48:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:48:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:26 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8340023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:26 np0005549474 python3.9[123040]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:26.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:48:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:26.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:48:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:26.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:48:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:27 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:27.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:27.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:27 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:48:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:48:27 np0005549474 python3.9[123196]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:48:28 np0005549474 python3.9[123274]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:48:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:28 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:28 np0005549474 systemd[1]: session-43.scope: Deactivated successfully.
Dec  7 04:48:28 np0005549474 systemd[1]: session-43.scope: Consumed 1.594s CPU time.
Dec  7 04:48:28 np0005549474 systemd-logind[796]: Session 43 logged out. Waiting for processes to exit.
Dec  7 04:48:28 np0005549474 systemd-logind[796]: Removed session 43.
Dec  7 04:48:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:29 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828002ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:29.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:29.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:29 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:29] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  7 04:48:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:29] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  7 04:48:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:48:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:30 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:31 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:31.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:48:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:31.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:48:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:31 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828002ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:32 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:33 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:33.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:33.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:33 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:48:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:34 np0005549474 systemd-logind[796]: New session 44 of user zuul.
Dec  7 04:48:34 np0005549474 systemd[1]: Started Session 44 of User zuul.
Dec  7 04:48:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:34 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828002ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:35 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:35.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:35.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:35 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:36 np0005549474 python3.9[123461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:48:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:48:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:36 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:36.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:48:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:37 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828002ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:37.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:37.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:37 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:37 np0005549474 python3.9[123644]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:38 np0005549474 python3.9[123819]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:48:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:38 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:38 np0005549474 python3.9[123897]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.6jones93 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:39.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:39.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd828002ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:48:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:39] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  7 04:48:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:39] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec  7 04:48:40 np0005549474 python3.9[124051]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:48:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:48:40 np0005549474 python3.9[124129]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ffr6wwli recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:40 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd83c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:41 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:41.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:41 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:42 np0005549474 python3.9[124283]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:48:42
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.meta', '.nfs', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms', '.rgw.root', 'backups']
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:48:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:48:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:48:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:42 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:48:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:48:42 np0005549474 python3.9[124435]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:48:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:43 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:43.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:43 np0005549474 python3.9[124516]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:48:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:43 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:43.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:48:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:44 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:44 np0005549474 python3.9[124668]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:48:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:45 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824000ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:48:45 np0005549474 python3.9[124747]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:48:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:45.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:45 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:45.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:45 np0005549474 python3.9[124900]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:48:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:46 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:46.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:48:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:46.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:48:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:46.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:48:47 np0005549474 python3.9[125053]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:48:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:47 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:47.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:47 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824000ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:47.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:47 np0005549474 python3.9[125132]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:48 np0005549474 python3.9[125284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:48:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:48 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:48 np0005549474 python3.9[125362]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:49 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:49 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:49.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:49] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:48:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:49] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:48:50 np0005549474 python3.9[125516]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:48:50 np0005549474 systemd[1]: Reloading.
Dec  7 04:48:50 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:48:50 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:48:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:48:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:50 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:48:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:51 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:51.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:51 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:51.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:52 np0005549474 python3.9[125708]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:48:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:52 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:53 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:53.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:53 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:53.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:53 np0005549474 python3.9[125788]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:54 np0005549474 python3.9[125940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:48:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:48:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:54 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:54 np0005549474 python3.9[126018]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:48:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:55 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:55.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:55 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:55.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:55 np0005549474 python3.9[126172]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:48:55 np0005549474 systemd[1]: Reloading.
Dec  7 04:48:55 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:48:55 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:48:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:48:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:56 np0005549474 systemd[1]: Starting Create netns directory...
Dec  7 04:48:56 np0005549474 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  7 04:48:56 np0005549474 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  7 04:48:56 np0005549474 systemd[1]: Finished Create netns directory.
Dec  7 04:48:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:56 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:48:56.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:48:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:57 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:57.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:57 np0005549474 python3.9[126390]: ansible-ansible.builtin.service_facts Invoked
Dec  7 04:48:57 np0005549474 network[126408]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  7 04:48:57 np0005549474 network[126409]: 'network-scripts' will be removed from distribution in near future.
Dec  7 04:48:57 np0005549474 network[126410]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  7 04:48:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:57 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:48:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:57.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:48:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:48:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:48:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:48:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:58 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:59 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:59.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:48:59 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:48:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:48:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:48:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:59.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:48:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:59] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec  7 04:48:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:48:59] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec  7 04:49:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:49:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:00 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:01 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:01.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:01 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:49:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:02 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:03 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:03.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:03 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:03.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:03 np0005549474 python3.9[126678]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:49:04 np0005549474 python3.9[126756]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:49:04 np0005549474 systemd[1]: session-18.scope: Deactivated successfully.
Dec  7 04:49:04 np0005549474 systemd[1]: session-18.scope: Consumed 1min 34.770s CPU time.
Dec  7 04:49:04 np0005549474 systemd-logind[796]: Session 18 logged out. Waiting for processes to exit.
Dec  7 04:49:04 np0005549474 systemd-logind[796]: Removed session 18.
Dec  7 04:49:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:04 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:05 np0005549474 python3.9[126909]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:05 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:05.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:05 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:05 np0005549474 python3.9[127062]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:49:06 np0005549474 python3.9[127140]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:49:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:06 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:49:06.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:49:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:49:06.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:49:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:49:06.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:49:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:07 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:07.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:07 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:07.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:07 np0005549474 python3.9[127294]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  7 04:49:07 np0005549474 systemd[1]: Starting Time & Date Service...
Dec  7 04:49:07 np0005549474 systemd[1]: Started Time & Date Service.
Dec  7 04:49:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:49:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:08 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:08 np0005549474 python3.9[127450]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:09 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:09.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:09 np0005549474 python3.9[127604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:49:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:09 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:09.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:09 np0005549474 python3.9[127682]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:09] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec  7 04:49:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:09] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec  7 04:49:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 04:49:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 2493 writes, 11K keys, 2493 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
Cumulative WAL: 2493 writes, 2493 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2493 writes, 11K keys, 2493 commit groups, 1.0 writes per commit group, ingest: 22.81 MB, 0.04 MB/s
Interval WAL: 2493 writes, 2493 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     41.7      0.45              0.04         4    0.114       0      0       0.0       0.0
  L6      1/0   13.45 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.9     85.2     74.0      0.49              0.10         3    0.163     11K   1350       0.0       0.0
 Sum      1/0   13.45 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   2.9     44.2     58.5      0.94              0.15         7    0.135     11K   1350       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   2.9     44.5     58.8      0.94              0.15         6    0.156     11K   1350       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     85.2     74.0      0.49              0.10         3    0.163     11K   1350       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     42.2      0.45              0.04         3    0.149       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      8.5      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.018, interval 0.018
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.05 GB write, 0.09 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.9 seconds
Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.9 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5637d9ea7350#2 capacity: 304.00 MB usage: 2.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(148,2.14 MB,0.705277%) FilterBlock(8,42.92 KB,0.0137881%) IndexBlock(8,84.58 KB,0.0271697%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  7 04:49:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/094910 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:49:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:49:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:10 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:10 np0005549474 python3.9[127834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:49:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:11 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:11 np0005549474 python3.9[127913]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.44frbtqp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:11.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:11 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:12 np0005549474 python3.9[128066]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:49:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:49:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:49:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:49:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:49:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:49:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:49:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:49:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:49:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:49:12 np0005549474 python3.9[128144]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:12 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:13 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8280040f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:13.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:13 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:13.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:13 np0005549474 python3.9[128298]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:49:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:49:14 np0005549474 python3[128451]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  7 04:49:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:14 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:15 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:15 np0005549474 python3.9[128606]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:49:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:15.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:15 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8480011c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:15.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:15 np0005549474 python3.9[128735]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:49:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:49:16 np0005549474 python3.9[128965]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:49:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:16 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:16 np0005549474 podman[129047]: 2025-12-07 09:49:16.753361357 +0000 UTC m=+0.040788311 container create 4afa495b58409eae5dd8a53890971a1dfbe53ffe1900d13f32e5272c2d291873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 04:49:16 np0005549474 systemd[1]: Started libpod-conmon-4afa495b58409eae5dd8a53890971a1dfbe53ffe1900d13f32e5272c2d291873.scope.
Dec  7 04:49:16 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:49:16 np0005549474 podman[129047]: 2025-12-07 09:49:16.732408357 +0000 UTC m=+0.019835341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:49:16 np0005549474 podman[129047]: 2025-12-07 09:49:16.943885829 +0000 UTC m=+0.231312873 container init 4afa495b58409eae5dd8a53890971a1dfbe53ffe1900d13f32e5272c2d291873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:49:16 np0005549474 podman[129047]: 2025-12-07 09:49:16.951156007 +0000 UTC m=+0.238583001 container start 4afa495b58409eae5dd8a53890971a1dfbe53ffe1900d13f32e5272c2d291873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_feynman, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:49:16 np0005549474 serene_feynman[129124]: 167 167
Dec  7 04:49:16 np0005549474 systemd[1]: libpod-4afa495b58409eae5dd8a53890971a1dfbe53ffe1900d13f32e5272c2d291873.scope: Deactivated successfully.
Dec  7 04:49:16 np0005549474 podman[129047]: 2025-12-07 09:49:16.957490079 +0000 UTC m=+0.244917053 container attach 4afa495b58409eae5dd8a53890971a1dfbe53ffe1900d13f32e5272c2d291873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_feynman, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:49:16 np0005549474 podman[129047]: 2025-12-07 09:49:16.957962042 +0000 UTC m=+0.245388986 container died 4afa495b58409eae5dd8a53890971a1dfbe53ffe1900d13f32e5272c2d291873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 04:49:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:49:16.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:49:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:49:16.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:49:16 np0005549474 python3.9[129129]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:17 np0005549474 systemd[1]: var-lib-containers-storage-overlay-28a33beef154a4f377041e96978ce5a03bcea9ddf6c15271dcc238b8b128a7be-merged.mount: Deactivated successfully.
Dec  7 04:49:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:17 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:17 np0005549474 podman[129047]: 2025-12-07 09:49:17.177163734 +0000 UTC m=+0.464590708 container remove 4afa495b58409eae5dd8a53890971a1dfbe53ffe1900d13f32e5272c2d291873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 04:49:17 np0005549474 systemd[1]: libpod-conmon-4afa495b58409eae5dd8a53890971a1dfbe53ffe1900d13f32e5272c2d291873.scope: Deactivated successfully.
Dec  7 04:49:17 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:49:17 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:49:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:49:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:17.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:49:17 np0005549474 podman[129178]: 2025-12-07 09:49:17.315972637 +0000 UTC m=+0.027695516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:49:17 np0005549474 podman[129178]: 2025-12-07 09:49:17.451020186 +0000 UTC m=+0.162743045 container create 49b056a92c7abe18ca49c236a20419c5f5231703d649525b91a5c26562f3ce00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:49:17 np0005549474 systemd[1]: Started libpod-conmon-49b056a92c7abe18ca49c236a20419c5f5231703d649525b91a5c26562f3ce00.scope.
Dec  7 04:49:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:17 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:17 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:49:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:17.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ded5cc564ee472a6f0b6b8c0083b77130687737c75d93d6691a87d514583bb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ded5cc564ee472a6f0b6b8c0083b77130687737c75d93d6691a87d514583bb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ded5cc564ee472a6f0b6b8c0083b77130687737c75d93d6691a87d514583bb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ded5cc564ee472a6f0b6b8c0083b77130687737c75d93d6691a87d514583bb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ded5cc564ee472a6f0b6b8c0083b77130687737c75d93d6691a87d514583bb5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:17 np0005549474 podman[129178]: 2025-12-07 09:49:17.628735589 +0000 UTC m=+0.340458438 container init 49b056a92c7abe18ca49c236a20419c5f5231703d649525b91a5c26562f3ce00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 04:49:17 np0005549474 podman[129178]: 2025-12-07 09:49:17.637185799 +0000 UTC m=+0.348908628 container start 49b056a92c7abe18ca49c236a20419c5f5231703d649525b91a5c26562f3ce00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 04:49:17 np0005549474 awesome_haslett[129218]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:49:17 np0005549474 awesome_haslett[129218]: --> All data devices are unavailable
Dec  7 04:49:17 np0005549474 systemd[1]: libpod-49b056a92c7abe18ca49c236a20419c5f5231703d649525b91a5c26562f3ce00.scope: Deactivated successfully.
Dec  7 04:49:18 np0005549474 python3.9[129327]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:49:18 np0005549474 podman[129178]: 2025-12-07 09:49:18.046131452 +0000 UTC m=+0.757854341 container attach 49b056a92c7abe18ca49c236a20419c5f5231703d649525b91a5c26562f3ce00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 04:49:18 np0005549474 podman[129178]: 2025-12-07 09:49:18.047336454 +0000 UTC m=+0.759059303 container died 49b056a92c7abe18ca49c236a20419c5f5231703d649525b91a5c26562f3ce00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:49:18 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4ded5cc564ee472a6f0b6b8c0083b77130687737c75d93d6691a87d514583bb5-merged.mount: Deactivated successfully.
Dec  7 04:49:18 np0005549474 podman[129178]: 2025-12-07 09:49:18.137423338 +0000 UTC m=+0.849146167 container remove 49b056a92c7abe18ca49c236a20419c5f5231703d649525b91a5c26562f3ce00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 04:49:18 np0005549474 systemd[1]: libpod-conmon-49b056a92c7abe18ca49c236a20419c5f5231703d649525b91a5c26562f3ce00.scope: Deactivated successfully.
Dec  7 04:49:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:49:18 np0005549474 python3.9[129477]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:18 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848002060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:18 np0005549474 podman[129547]: 2025-12-07 09:49:18.620782909 +0000 UTC m=+0.021288522 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:49:18 np0005549474 podman[129547]: 2025-12-07 09:49:18.793542025 +0000 UTC m=+0.194047618 container create f7a175ee36050f001807602da13f9e8008a0b0370929d9474522f29281a97736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 04:49:18 np0005549474 systemd[1]: Started libpod-conmon-f7a175ee36050f001807602da13f9e8008a0b0370929d9474522f29281a97736.scope.
Dec  7 04:49:18 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:49:18 np0005549474 podman[129547]: 2025-12-07 09:49:18.947185901 +0000 UTC m=+0.347691514 container init f7a175ee36050f001807602da13f9e8008a0b0370929d9474522f29281a97736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:49:18 np0005549474 podman[129547]: 2025-12-07 09:49:18.953791932 +0000 UTC m=+0.354297515 container start f7a175ee36050f001807602da13f9e8008a0b0370929d9474522f29281a97736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:49:18 np0005549474 podman[129547]: 2025-12-07 09:49:18.957095812 +0000 UTC m=+0.357601405 container attach f7a175ee36050f001807602da13f9e8008a0b0370929d9474522f29281a97736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 04:49:18 np0005549474 amazing_buck[129614]: 167 167
Dec  7 04:49:18 np0005549474 systemd[1]: libpod-f7a175ee36050f001807602da13f9e8008a0b0370929d9474522f29281a97736.scope: Deactivated successfully.
Dec  7 04:49:18 np0005549474 podman[129547]: 2025-12-07 09:49:18.959100537 +0000 UTC m=+0.359606130 container died f7a175ee36050f001807602da13f9e8008a0b0370929d9474522f29281a97736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 04:49:18 np0005549474 systemd[1]: var-lib-containers-storage-overlay-88880977bbfb81df9ef65c7546d95c803874e1e3ddb3d01b9e9ea91f99e82b36-merged.mount: Deactivated successfully.
Dec  7 04:49:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:18 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:49:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:19 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:19 np0005549474 python3.9[129708]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:49:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:19.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:19 np0005549474 podman[129547]: 2025-12-07 09:49:19.320930115 +0000 UTC m=+0.721435718 container remove f7a175ee36050f001807602da13f9e8008a0b0370929d9474522f29281a97736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 04:49:19 np0005549474 systemd[1]: libpod-conmon-f7a175ee36050f001807602da13f9e8008a0b0370929d9474522f29281a97736.scope: Deactivated successfully.
Dec  7 04:49:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:19 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:19.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:19 np0005549474 podman[129767]: 2025-12-07 09:49:19.537342981 +0000 UTC m=+0.108909378 container create 10a330c7ddc51692861a3ed132847da1648a7bd5408fc1d49ff8316a8de3e245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_heyrovsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:49:19 np0005549474 podman[129767]: 2025-12-07 09:49:19.458256377 +0000 UTC m=+0.029822804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:49:19 np0005549474 systemd[1]: Started libpod-conmon-10a330c7ddc51692861a3ed132847da1648a7bd5408fc1d49ff8316a8de3e245.scope.
Dec  7 04:49:19 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:49:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86bac8aff9e9278f88c4a6dd3485792694d2bb21f5911dd2679b018ef4f1c541/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86bac8aff9e9278f88c4a6dd3485792694d2bb21f5911dd2679b018ef4f1c541/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86bac8aff9e9278f88c4a6dd3485792694d2bb21f5911dd2679b018ef4f1c541/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86bac8aff9e9278f88c4a6dd3485792694d2bb21f5911dd2679b018ef4f1c541/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:19 np0005549474 python3.9[129810]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:19 np0005549474 podman[129767]: 2025-12-07 09:49:19.806340461 +0000 UTC m=+0.377906918 container init 10a330c7ddc51692861a3ed132847da1648a7bd5408fc1d49ff8316a8de3e245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_heyrovsky, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:49:19 np0005549474 podman[129767]: 2025-12-07 09:49:19.813539947 +0000 UTC m=+0.385106354 container start 10a330c7ddc51692861a3ed132847da1648a7bd5408fc1d49ff8316a8de3e245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_heyrovsky, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:49:19 np0005549474 podman[129767]: 2025-12-07 09:49:19.817869705 +0000 UTC m=+0.389436102 container attach 10a330c7ddc51692861a3ed132847da1648a7bd5408fc1d49ff8316a8de3e245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:49:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:19] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  7 04:49:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:19] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]: {
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:    "0": [
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:        {
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            "devices": [
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "/dev/loop3"
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            ],
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            "lv_name": "ceph_lv0",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            "lv_size": "21470642176",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            "name": "ceph_lv0",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            "tags": {
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.cluster_name": "ceph",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.crush_device_class": "",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.encrypted": "0",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.osd_id": "0",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.type": "block",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.vdo": "0",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:                "ceph.with_tpm": "0"
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            },
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            "type": "block",
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:            "vg_name": "ceph_vg0"
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:        }
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]:    ]
Dec  7 04:49:20 np0005549474 zealous_heyrovsky[129813]: }
Dec  7 04:49:20 np0005549474 systemd[1]: libpod-10a330c7ddc51692861a3ed132847da1648a7bd5408fc1d49ff8316a8de3e245.scope: Deactivated successfully.
Dec  7 04:49:20 np0005549474 podman[129767]: 2025-12-07 09:49:20.131625114 +0000 UTC m=+0.703191521 container died 10a330c7ddc51692861a3ed132847da1648a7bd5408fc1d49ff8316a8de3e245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_heyrovsky, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 04:49:20 np0005549474 systemd[1]: var-lib-containers-storage-overlay-86bac8aff9e9278f88c4a6dd3485792694d2bb21f5911dd2679b018ef4f1c541-merged.mount: Deactivated successfully.
Dec  7 04:49:20 np0005549474 podman[129767]: 2025-12-07 09:49:20.28415109 +0000 UTC m=+0.855717497 container remove 10a330c7ddc51692861a3ed132847da1648a7bd5408fc1d49ff8316a8de3e245 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 04:49:20 np0005549474 systemd[1]: libpod-conmon-10a330c7ddc51692861a3ed132847da1648a7bd5408fc1d49ff8316a8de3e245.scope: Deactivated successfully.
Dec  7 04:49:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:49:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:20 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:20 np0005549474 python3.9[130033]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:49:20 np0005549474 podman[130074]: 2025-12-07 09:49:20.78187512 +0000 UTC m=+0.021122366 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:49:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:21 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848002060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:21 np0005549474 python3.9[130166]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:21.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:21 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:21.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:21 np0005549474 podman[130074]: 2025-12-07 09:49:21.539259267 +0000 UTC m=+0.778506493 container create 7dfb6edda345fbb49430692e41d6f34b197701813c520169c687ccc059330a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 04:49:21 np0005549474 systemd[1]: Started libpod-conmon-7dfb6edda345fbb49430692e41d6f34b197701813c520169c687ccc059330a8a.scope.
Dec  7 04:49:21 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:49:21 np0005549474 podman[130074]: 2025-12-07 09:49:21.634534153 +0000 UTC m=+0.873781399 container init 7dfb6edda345fbb49430692e41d6f34b197701813c520169c687ccc059330a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:49:21 np0005549474 podman[130074]: 2025-12-07 09:49:21.64434574 +0000 UTC m=+0.883592976 container start 7dfb6edda345fbb49430692e41d6f34b197701813c520169c687ccc059330a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:49:21 np0005549474 podman[130074]: 2025-12-07 09:49:21.649600203 +0000 UTC m=+0.888847469 container attach 7dfb6edda345fbb49430692e41d6f34b197701813c520169c687ccc059330a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 04:49:21 np0005549474 lucid_lehmann[130215]: 167 167
Dec  7 04:49:21 np0005549474 systemd[1]: libpod-7dfb6edda345fbb49430692e41d6f34b197701813c520169c687ccc059330a8a.scope: Deactivated successfully.
Dec  7 04:49:21 np0005549474 podman[130074]: 2025-12-07 09:49:21.651047613 +0000 UTC m=+0.890294839 container died 7dfb6edda345fbb49430692e41d6f34b197701813c520169c687ccc059330a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 04:49:21 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d329ef511e51752fb6ce244fc824bdd6f80f349c323673ee72d5d76e119f0460-merged.mount: Deactivated successfully.
Dec  7 04:49:21 np0005549474 podman[130074]: 2025-12-07 09:49:21.708473287 +0000 UTC m=+0.947720503 container remove 7dfb6edda345fbb49430692e41d6f34b197701813c520169c687ccc059330a8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  7 04:49:21 np0005549474 systemd[1]: libpod-conmon-7dfb6edda345fbb49430692e41d6f34b197701813c520169c687ccc059330a8a.scope: Deactivated successfully.
Dec  7 04:49:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:21 np0005549474 podman[130313]: 2025-12-07 09:49:21.861746713 +0000 UTC m=+0.054389603 container create 8a11ec1dbf0bf28cae13da2c95cecb7f8ca367b9c9335d9a65d95e78e267b125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bartik, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:49:21 np0005549474 systemd[1]: Started libpod-conmon-8a11ec1dbf0bf28cae13da2c95cecb7f8ca367b9c9335d9a65d95e78e267b125.scope.
Dec  7 04:49:21 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:49:21 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d9c2796ddd1f2b524b40ab85b5256df46547c15d6f8bc8dd3fd3c11244039a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:21 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d9c2796ddd1f2b524b40ab85b5256df46547c15d6f8bc8dd3fd3c11244039a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:21 np0005549474 podman[130313]: 2025-12-07 09:49:21.827642024 +0000 UTC m=+0.020284944 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:49:21 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d9c2796ddd1f2b524b40ab85b5256df46547c15d6f8bc8dd3fd3c11244039a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:21 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d9c2796ddd1f2b524b40ab85b5256df46547c15d6f8bc8dd3fd3c11244039a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:21 np0005549474 podman[130313]: 2025-12-07 09:49:21.942705359 +0000 UTC m=+0.135348269 container init 8a11ec1dbf0bf28cae13da2c95cecb7f8ca367b9c9335d9a65d95e78e267b125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 04:49:21 np0005549474 podman[130313]: 2025-12-07 09:49:21.952389123 +0000 UTC m=+0.145032003 container start 8a11ec1dbf0bf28cae13da2c95cecb7f8ca367b9c9335d9a65d95e78e267b125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:49:21 np0005549474 podman[130313]: 2025-12-07 09:49:21.955436156 +0000 UTC m=+0.148079046 container attach 8a11ec1dbf0bf28cae13da2c95cecb7f8ca367b9c9335d9a65d95e78e267b125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bartik, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 04:49:22 np0005549474 python3.9[130360]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:49:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:22 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:49:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:22 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:49:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:49:22 np0005549474 lvm[130515]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:49:22 np0005549474 lvm[130515]: VG ceph_vg0 finished
Dec  7 04:49:22 np0005549474 gallant_bartik[130361]: {}
Dec  7 04:49:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:22 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:22 np0005549474 systemd[1]: libpod-8a11ec1dbf0bf28cae13da2c95cecb7f8ca367b9c9335d9a65d95e78e267b125.scope: Deactivated successfully.
Dec  7 04:49:22 np0005549474 systemd[1]: libpod-8a11ec1dbf0bf28cae13da2c95cecb7f8ca367b9c9335d9a65d95e78e267b125.scope: Consumed 1.204s CPU time.
Dec  7 04:49:22 np0005549474 podman[130313]: 2025-12-07 09:49:22.674305353 +0000 UTC m=+0.866948243 container died 8a11ec1dbf0bf28cae13da2c95cecb7f8ca367b9c9335d9a65d95e78e267b125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 04:49:22 np0005549474 systemd[1]: var-lib-containers-storage-overlay-18d9c2796ddd1f2b524b40ab85b5256df46547c15d6f8bc8dd3fd3c11244039a-merged.mount: Deactivated successfully.
Dec  7 04:49:22 np0005549474 python3.9[130608]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:23 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 04:49:23 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 04:49:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:23 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:23 np0005549474 podman[130313]: 2025-12-07 09:49:23.250317397 +0000 UTC m=+1.442960287 container remove 8a11ec1dbf0bf28cae13da2c95cecb7f8ca367b9c9335d9a65d95e78e267b125 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:49:23 np0005549474 systemd[1]: libpod-conmon-8a11ec1dbf0bf28cae13da2c95cecb7f8ca367b9c9335d9a65d95e78e267b125.scope: Deactivated successfully.
Dec  7 04:49:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:49:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:23.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:49:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:49:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:49:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:49:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:49:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:23 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848002920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:23.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:23 np0005549474 python3.9[130788]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:23 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:49:23 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:49:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:49:24 np0005549474 python3.9[130940]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:24 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:25 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:25.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:25 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:49:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:25 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:25.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:25 np0005549474 python3.9[131095]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  7 04:49:26 np0005549474 python3.9[131247]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  7 04:49:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:49:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:26 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848002920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:26 np0005549474 systemd[1]: session-44.scope: Deactivated successfully.
Dec  7 04:49:26 np0005549474 systemd[1]: session-44.scope: Consumed 27.757s CPU time.
Dec  7 04:49:26 np0005549474 systemd-logind[796]: Session 44 logged out. Waiting for processes to exit.
Dec  7 04:49:26 np0005549474 systemd-logind[796]: Removed session 44.
Dec  7 04:49:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:49:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:49:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:49:26.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:49:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:27 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:27.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:49:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:49:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:27 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:27.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:49:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:28 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:29 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:29.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:29 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:29] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  7 04:49:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:29] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  7 04:49:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:49:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:30 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:31 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:31.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:31 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:31.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:32 np0005549474 systemd-logind[796]: New session 45 of user zuul.
Dec  7 04:49:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/094932 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:49:32 np0005549474 systemd[1]: Started Session 45 of User zuul.
Dec  7 04:49:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  7 04:49:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:32 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:32 np0005549474 python3.9[131433]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  7 04:49:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:33 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:33.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:33 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:49:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:49:33 np0005549474 python3.9[131587]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:49:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  7 04:49:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:34 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:34 np0005549474 python3.9[131741]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec  7 04:49:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:35 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:35.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:35 np0005549474 python3.9[131895]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.9mxv94eg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:49:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:35 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:36 np0005549474 python3.9[132020]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.9mxv94eg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100974.8969018-102-146268065163059/.source.9mxv94eg _original_basename=.if3spkv3 follow=False checksum=d33b7f67f47d03d6f4e754679ee7a83508aaa6d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:49:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:36 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:49:36.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:49:37 np0005549474 python3.9[132178]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:49:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:37 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:37.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:37 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:49:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:37.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:49:37 np0005549474 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  7 04:49:38 np0005549474 python3.9[132353]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDztIgdvWfbGcTBsnJ/M+7HPF8fmQq/y+Bl35+zFajL3KlZAwT5Jrd0wBJFCENJp3TXe2vCz5X1q7WE7KkTCmfFoRuHmoqlZhTqT9s/+r8kiDatZiqCOWaKW4t/5FdXKBIVPlkry4+jUtXum7Hjaqx3CWAN9zTBaMGorSAA8LKMMvZPP0EYbAxaLgivTJ1mbZF0/ZNGo/5WQc2vAa9bAToTb0YwrajhjGwm8gpS1t7deqebzgprT7jWeXpxQZEVS/ynyQFICZ5W6covXVgsWgQNtfbmweGFQOMlP0vZE1/P3GUjWJgmaVsDrNDWdjCgiaRAZnNCC01eZyUjas+eot7B1Sg0BLS3JeORj3tIRcVI9DkuMQCdex5q/BCiz8YueUZn4qIiyvmG1max5Xui0X1LygXyNdyBWs5DbBGfPsFBLyXT1noEfYsgk5v0iu8DLl+PShKLO8xLqJMeYVYsUY8uG6qv+lA0YbVeiMomYLVXMABowwzcwzKHnlj5f+keT0=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICAU0KXuEPsaXKf0jGICVhewmjwEgAqPrkc4waZyQc7o#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBUF894VPJUzj6uHFODSSpNciOlDtn3PuhA44yhVzfkk/lOehkynDHVgBX6zwUYnOmiLJE7vHinKqWzoAVHhOas=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVimUYmVq1jwN4I5i4nI9XPpovC84bLnjioQY6MxnDdHWaEfuEub8qpNrfkTCFppybs82dXQEl9witk6tAj8GQQGfFN/IfI+GFHby5G2bWpOumixFRFVkhc3QW9inlnJNA0TMzwlbz5LOkL9/ShhCpshMnBGNjKJFaH5GvlqpWCYYAotq1zbwd6SRIu4O5cPa3+7mFmXKtlFl28oAFp3NMsNJ9wbIWhXeOcfUSNbrL52O30C6TKW8HiBC2kfg578bm0Pa6r2iMvPHhW7kMm5eQwUfB5l5JKgIsDJmaKjLej/4U7hO52yut7hfnV3O8qK0ZpD2xEwhe9OneH4tKueT63SehDENUIJWAasPiPrlHWkfm6PWhKwPMBu3Vuir/4R1SA6ZIJEzQeGq/nUuSBtbDZC4jDuXb8oywpR/uCaBgZbziPhqBMIegQDMvKeQGQmZn6V+eKkfv3I9Z83LbQRXEnIWiuf4XRp1btGZYv0+Q7zgiD+dw9QxCgWkdWxA9SoM=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEWDyTOT2SMCqj8YwhAvKshXrBfGOObG4cDM9r5B2FZj#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJuP9cUBko1m6+714/2inXnWXQqIN7Sx7/A0GBQAjM8bAkICVNXZtk9Pu38lY43gxHx3nZ57o3Dpp2ak8tsjrR4=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCip7MnZvuJx8DmLIGnIc8NcND4H8xH1hog1PQWG+WFQHEqpA3BpOSGhk8Mr5skxXappecIladNg73ReINM2gE58XsvsHhQICeXuRBK091YtVSafixD3fEvhD+xGUIukp3F6EPKU0x4WQ0xWQC38o13OyZtGRApI6AQEAxg0QMsB7qwwroH6ag7l7U4sv5nYqK3upInbblwL0LYfo6jyhHnhwZBVjv2MTJ8zZktF54SlM68fh8WQwQbA7VMqK6wEJlDRkdsIXPbq2PN6V08KJlBkBlvgXu5aTIeGQ5DdFuKQutnMEWlwiCtoJNly6Pv7PwjZnDKkPQP5RamELk/eKCRHXY5SbfmyG9VtAHHEV2f9NsjnFZRBx9ikx/H6/NpPmlMji5VbyfY1b0u0DreNZqm2bDWRcL++rsjZDfWqh2cJOF4Jan0m12bfjWDBXeGiunpl4XWydA0nbi0v4RHvH6pD2BoTuxC2rVSR233WC88Xe5HU1WoXegIy43ksMeFvGs=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYukxfCIA1Xurqi7GbVHfVTkzw++ujxQPgfwUA9AznN#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOT58aEV4d46XVVznwJYUJL8kuqtWeT85ng6XRArVPbONJirV0BPyfS1SwB7SxPwywavSEowgTdPM8QvrYiA0kE=#012 create=True mode=0644 path=/tmp/ansible.9mxv94eg state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:49:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:38 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:39 np0005549474 python3.9[132506]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.9mxv94eg' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:49:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd848004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:39.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:39 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:49:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:39.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:49:39 np0005549474 python3.9[132661]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.9mxv94eg state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:39] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  7 04:49:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:39] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  7 04:49:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/094940 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:49:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:49:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:40 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:40 np0005549474 systemd[1]: session-45.scope: Deactivated successfully.
Dec  7 04:49:40 np0005549474 systemd[1]: session-45.scope: Consumed 4.635s CPU time.
Dec  7 04:49:40 np0005549474 systemd-logind[796]: Session 45 logged out. Waiting for processes to exit.
Dec  7 04:49:40 np0005549474 systemd-logind[796]: Removed session 45.
Dec  7 04:49:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:41 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:41.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:41 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:41.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:49:42
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'backups', 'volumes', '.nfs', 'cephfs.cephfs.data', 'images', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.meta']
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:49:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:49:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:49:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:42 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd81c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:49:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:49:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:43 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd824002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:43.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:43 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:49:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:43.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:49:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[118014]: 07/12/2025 09:49:44 : epoch 69354d3b : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd84c0045e0 fd 38 proxy ignored for local
Dec  7 04:49:44 np0005549474 kernel: ganesha.nfsd[119941]: segfault at 50 ip 00007fd8fdb2032e sp 00007fd8ccff8210 error 4 in libntirpc.so.5.8[7fd8fdb05000+2c000] likely on CPU 4 (core 0, socket 4)
Dec  7 04:49:44 np0005549474 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  7 04:49:44 np0005549474 systemd[1]: Started Process Core Dump (PID 132692/UID 0).
Dec  7 04:49:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:45.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:45.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:45 np0005549474 systemd-coredump[132693]: Process 118047 (ganesha.nfsd) of user 0 dumped core.
    Stack trace of thread 42:
    #0  0x00007fd8fdb2032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
    ELF object binary architecture: AMD x86-64
Dec  7 04:49:45 np0005549474 systemd[1]: systemd-coredump@2-132692-0.service: Deactivated successfully.
Dec  7 04:49:45 np0005549474 systemd[1]: systemd-coredump@2-132692-0.service: Consumed 1.157s CPU time.
Dec  7 04:49:45 np0005549474 podman[132700]: 2025-12-07 09:49:45.965231746 +0000 UTC m=+0.034636205 container died 7a7c700412f9120e1d9a345cfd362d0242d02fda114be02a9b7c3782deef35b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  7 04:49:45 np0005549474 systemd[1]: var-lib-containers-storage-overlay-10e3891aa34f62b012f42f15ff3af48657b0af02694cd4922a6f5a17d6a4d406-merged.mount: Deactivated successfully.
Dec  7 04:49:46 np0005549474 podman[132700]: 2025-12-07 09:49:46.019142715 +0000 UTC m=+0.088547094 container remove 7a7c700412f9120e1d9a345cfd362d0242d02fda114be02a9b7c3782deef35b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 04:49:46 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Main process exited, code=exited, status=139/n/a
Dec  7 04:49:46 np0005549474 systemd-logind[796]: New session 46 of user zuul.
Dec  7 04:49:46 np0005549474 systemd[1]: Started Session 46 of User zuul.
Dec  7 04:49:46 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Failed with result 'exit-code'.
Dec  7 04:49:46 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.585s CPU time.
Dec  7 04:49:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:49:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:49:46.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.120549) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987120616, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1655, "num_deletes": 250, "total_data_size": 3219348, "memory_usage": 3266760, "flush_reason": "Manual Compaction"}
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987140373, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1961278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10911, "largest_seqno": 12565, "table_properties": {"data_size": 1955355, "index_size": 2998, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14774, "raw_average_key_size": 20, "raw_value_size": 1942546, "raw_average_value_size": 2686, "num_data_blocks": 132, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100806, "oldest_key_time": 1765100806, "file_creation_time": 1765100987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 19852 microseconds, and 5404 cpu microseconds.
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.140418) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1961278 bytes OK
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.140443) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.142298) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.142311) EVENT_LOG_v1 {"time_micros": 1765100987142308, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.142327) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3212289, prev total WAL file size 3212289, number of live WAL files 2.
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.142952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1915KB)], [26(13MB)]
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987142973, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16069848, "oldest_snapshot_seqno": -1}
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4415 keys, 14090510 bytes, temperature: kUnknown
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987267368, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14090510, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14056890, "index_size": 21478, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 111894, "raw_average_key_size": 25, "raw_value_size": 13972187, "raw_average_value_size": 3164, "num_data_blocks": 920, "num_entries": 4415, "num_filter_entries": 4415, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765100987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.267619) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14090510 bytes
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.268631) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.1 rd, 113.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 13.5 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(15.4) write-amplify(7.2) OK, records in: 4861, records dropped: 446 output_compression: NoCompression
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.268654) EVENT_LOG_v1 {"time_micros": 1765100987268643, "job": 10, "event": "compaction_finished", "compaction_time_micros": 124481, "compaction_time_cpu_micros": 35620, "output_level": 6, "num_output_files": 1, "total_output_size": 14090510, "num_input_records": 4861, "num_output_records": 4415, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987269177, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987272449, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.142923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.272487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.272493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.272495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.272497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:49:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:49:47.272499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:49:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:47.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:47 np0005549474 python3.9[132898]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:49:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:49:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:47.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:49:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:49:48 np0005549474 python3.9[133054]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  7 04:49:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:49.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:49.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:49] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:49:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:49] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:49:50 np0005549474 python3.9[133210]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:49:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec  7 04:49:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:49:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/094950 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:49:50 np0005549474 python3.9[133364]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:49:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:51.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:51.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:51 np0005549474 python3.9[133518]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:49:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:49:52 np0005549474 python3.9[133670]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:49:53 np0005549474 systemd[1]: session-46.scope: Deactivated successfully.
Dec  7 04:49:53 np0005549474 systemd[1]: session-46.scope: Consumed 3.757s CPU time.
Dec  7 04:49:53 np0005549474 systemd-logind[796]: Session 46 logged out. Waiting for processes to exit.
Dec  7 04:49:53 np0005549474 systemd-logind[796]: Removed session 46.
Dec  7 04:49:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:53.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:53.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:49:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:55.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:55.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:56 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Scheduled restart job, restart counter is at 3.
Dec  7 04:49:56 np0005549474 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:49:56 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.585s CPU time.
Dec  7 04:49:56 np0005549474 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:49:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:49:56 np0005549474 podman[133750]: 2025-12-07 09:49:56.636729947 +0000 UTC m=+0.038069348 container create ec6f1bcb3e88969db5073af3d931110f3d7ab4826ee44ff0ba85a053ab4559f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:49:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c3c03d8fbece64e405df1fff6cd1cda0975b603dafaeebd1a4eb5516b37a110/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c3c03d8fbece64e405df1fff6cd1cda0975b603dafaeebd1a4eb5516b37a110/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c3c03d8fbece64e405df1fff6cd1cda0975b603dafaeebd1a4eb5516b37a110/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c3c03d8fbece64e405df1fff6cd1cda0975b603dafaeebd1a4eb5516b37a110/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bjrqrk-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:49:56 np0005549474 podman[133750]: 2025-12-07 09:49:56.693929886 +0000 UTC m=+0.095269317 container init ec6f1bcb3e88969db5073af3d931110f3d7ab4826ee44ff0ba85a053ab4559f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:49:56 np0005549474 podman[133750]: 2025-12-07 09:49:56.700165186 +0000 UTC m=+0.101504587 container start ec6f1bcb3e88969db5073af3d931110f3d7ab4826ee44ff0ba85a053ab4559f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:49:56 np0005549474 bash[133750]: ec6f1bcb3e88969db5073af3d931110f3d7ab4826ee44ff0ba85a053ab4559f3
Dec  7 04:49:56 np0005549474 podman[133750]: 2025-12-07 09:49:56.619534429 +0000 UTC m=+0.020873860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:49:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:49:56 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 04:49:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:49:56 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 04:49:56 np0005549474 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:49:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:49:56 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 04:49:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:49:56 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 04:49:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:49:56 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 04:49:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:49:56 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 04:49:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:49:56 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 04:49:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:49:56 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:49:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:49:56.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:49:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:49:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:57.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:49:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:49:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:49:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:57.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:49:58 np0005549474 systemd-logind[796]: New session 47 of user zuul.
Dec  7 04:49:58 np0005549474 systemd[1]: Started Session 47 of User zuul.
Dec  7 04:49:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:49:59 np0005549474 python3.9[133988]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:49:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:59.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:49:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:49:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:59.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:49:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:59] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec  7 04:49:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:49:59] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec  7 04:50:00 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec  7 04:50:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Dec  7 04:50:00 np0005549474 python3.9[134145]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:50:00 np0005549474 ceph-mon[74516]: overall HEALTH_OK
Dec  7 04:50:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:01.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:01 np0005549474 python3.9[134230]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  7 04:50:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:01.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:50:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:50:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:03.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:03 np0005549474 python3.9[134384]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:50:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:03.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Dec  7 04:50:04 np0005549474 python3.9[134535]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  7 04:50:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:05.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:05.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:05 np0005549474 python3.9[134687]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:50:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095006 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:50:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec  7 04:50:06 np0005549474 python3.9[134837]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:50:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:50:06.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:50:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:07 np0005549474 systemd[1]: session-47.scope: Deactivated successfully.
Dec  7 04:50:07 np0005549474 systemd[1]: session-47.scope: Consumed 5.734s CPU time.
Dec  7 04:50:07 np0005549474 systemd-logind[796]: Session 47 logged out. Waiting for processes to exit.
Dec  7 04:50:07 np0005549474 systemd-logind[796]: Removed session 47.
Dec  7 04:50:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:07.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:07.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000009:nfs.cephfs.2: -2
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  7 04:50:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:50:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:09 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:09.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:09 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd440016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:09.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:09] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec  7 04:50:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:09] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec  7 04:50:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  7 04:50:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:10 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:11 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:11.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:11 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:11.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:12 np0005549474 systemd-logind[796]: New session 48 of user zuul.
Dec  7 04:50:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:50:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:50:12 np0005549474 systemd[1]: Started Session 48 of User zuul.
Dec  7 04:50:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:50:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:50:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:50:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:50:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:50:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:50:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:50:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095012 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:50:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:12 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd44002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:13 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd300016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:13.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:13 np0005549474 python3.9[135038]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:50:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:13 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd38001140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:13.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:50:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:14 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:15 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd44002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:15 np0005549474 python3.9[135195]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:15.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:15 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd300016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:15.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:16 np0005549474 python3.9[135348]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec  7 04:50:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:16 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd38001c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:50:16.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:50:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:17 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:17.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:17 np0005549474 python3.9[135500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:17 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd44002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:17.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:18 np0005549474 python3.9[135650]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101016.2225275-154-122230533556165/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=340338937fc46537a676ebda28d840cda013be9b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec  7 04:50:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:18 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd300016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:18 np0005549474 python3.9[135802]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:19 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd38001c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:19.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:19 np0005549474 python3.9[135927]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101018.4927936-154-201739034029541/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=34e398db9ed1f36bf489ed136dc8e2f62ebc4eab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:19 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:19.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:19] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:50:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:19] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:50:20 np0005549474 python3.9[136079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Dec  7 04:50:20 np0005549474 python3.9[136202]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101019.6265287-154-49905103895298/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=559e34b4887933136aef35a2ee670f186282ffa2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:20 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd44002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:21 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:21.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:21 np0005549474 python3.9[136356]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:21 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd38001c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:21.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:21 np0005549474 python3.9[136508]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:50:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:22 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:22 np0005549474 python3.9[136660]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:23 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd44002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:23.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:23 np0005549474 python3.9[136785]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101022.1859112-330-28729348447893/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7c828e0a11b53abd0c3632817b6b751712e2a5b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:23 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:23.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:23 np0005549474 python3.9[136987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:50:24 np0005549474 python3.9[137180]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101023.5737867-330-43368693685356/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7f7d1cd622d2240bbe15befe04459424cf20396a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:24 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd380030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:50:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:50:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:50:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:50:25 np0005549474 python3.9[137386]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:25 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:25.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:25 np0005549474 podman[137533]: 2025-12-07 09:50:25.359154009 +0000 UTC m=+0.036760322 container create f3e2b906f830c9f7cc3f8c531207c8b2939f42b5b1e1e3f75162ffbbcc076ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 04:50:25 np0005549474 systemd[1]: Started libpod-conmon-f3e2b906f830c9f7cc3f8c531207c8b2939f42b5b1e1e3f75162ffbbcc076ef5.scope.
Dec  7 04:50:25 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:50:25 np0005549474 podman[137533]: 2025-12-07 09:50:25.432400595 +0000 UTC m=+0.110006928 container init f3e2b906f830c9f7cc3f8c531207c8b2939f42b5b1e1e3f75162ffbbcc076ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 04:50:25 np0005549474 podman[137533]: 2025-12-07 09:50:25.343361709 +0000 UTC m=+0.020968042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:50:25 np0005549474 podman[137533]: 2025-12-07 09:50:25.44082567 +0000 UTC m=+0.118431983 container start f3e2b906f830c9f7cc3f8c531207c8b2939f42b5b1e1e3f75162ffbbcc076ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 04:50:25 np0005549474 podman[137533]: 2025-12-07 09:50:25.4441855 +0000 UTC m=+0.121791833 container attach f3e2b906f830c9f7cc3f8c531207c8b2939f42b5b1e1e3f75162ffbbcc076ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  7 04:50:25 np0005549474 wizardly_pike[137591]: 167 167
Dec  7 04:50:25 np0005549474 systemd[1]: libpod-f3e2b906f830c9f7cc3f8c531207c8b2939f42b5b1e1e3f75162ffbbcc076ef5.scope: Deactivated successfully.
Dec  7 04:50:25 np0005549474 podman[137533]: 2025-12-07 09:50:25.446577843 +0000 UTC m=+0.124184176 container died f3e2b906f830c9f7cc3f8c531207c8b2939f42b5b1e1e3f75162ffbbcc076ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_pike, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 04:50:25 np0005549474 systemd[1]: var-lib-containers-storage-overlay-bba5452583acd6a12fb7335add9b398681240ba49e80bd2dab7a6f570842d9aa-merged.mount: Deactivated successfully.
Dec  7 04:50:25 np0005549474 podman[137533]: 2025-12-07 09:50:25.491982476 +0000 UTC m=+0.169588789 container remove f3e2b906f830c9f7cc3f8c531207c8b2939f42b5b1e1e3f75162ffbbcc076ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:50:25 np0005549474 systemd[1]: libpod-conmon-f3e2b906f830c9f7cc3f8c531207c8b2939f42b5b1e1e3f75162ffbbcc076ef5.scope: Deactivated successfully.
Dec  7 04:50:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:25 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd44002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:25 np0005549474 python3.9[137598]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101024.6411405-330-225449787824899/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2c9d0680f16b5c0785cc41f4146617e803ee03d7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:50:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:25.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:50:25 np0005549474 podman[137620]: 2025-12-07 09:50:25.648759571 +0000 UTC m=+0.041241972 container create 52b128ac3e5764150fbdf484fe4509ba4baf8c5a5d029168dada9bf75776dc2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_bardeen, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:50:25 np0005549474 systemd[1]: Started libpod-conmon-52b128ac3e5764150fbdf484fe4509ba4baf8c5a5d029168dada9bf75776dc2c.scope.
Dec  7 04:50:25 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:50:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b86b02b2fbfe16ff61c9c276e24fbcf73b2a01ce75d839d1c6ef982d125470a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b86b02b2fbfe16ff61c9c276e24fbcf73b2a01ce75d839d1c6ef982d125470a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b86b02b2fbfe16ff61c9c276e24fbcf73b2a01ce75d839d1c6ef982d125470a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b86b02b2fbfe16ff61c9c276e24fbcf73b2a01ce75d839d1c6ef982d125470a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b86b02b2fbfe16ff61c9c276e24fbcf73b2a01ce75d839d1c6ef982d125470a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:25 np0005549474 podman[137620]: 2025-12-07 09:50:25.63074993 +0000 UTC m=+0.023232351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:50:25 np0005549474 podman[137620]: 2025-12-07 09:50:25.735001563 +0000 UTC m=+0.127483994 container init 52b128ac3e5764150fbdf484fe4509ba4baf8c5a5d029168dada9bf75776dc2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_bardeen, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:50:25 np0005549474 podman[137620]: 2025-12-07 09:50:25.742485833 +0000 UTC m=+0.134968254 container start 52b128ac3e5764150fbdf484fe4509ba4baf8c5a5d029168dada9bf75776dc2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_bardeen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:50:25 np0005549474 podman[137620]: 2025-12-07 09:50:25.745930435 +0000 UTC m=+0.138412836 container attach 52b128ac3e5764150fbdf484fe4509ba4baf8c5a5d029168dada9bf75776dc2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_bardeen, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Dec  7 04:50:26 np0005549474 vigilant_bardeen[137660]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:50:26 np0005549474 vigilant_bardeen[137660]: --> All data devices are unavailable
Dec  7 04:50:26 np0005549474 systemd[1]: libpod-52b128ac3e5764150fbdf484fe4509ba4baf8c5a5d029168dada9bf75776dc2c.scope: Deactivated successfully.
Dec  7 04:50:26 np0005549474 podman[137620]: 2025-12-07 09:50:26.115516701 +0000 UTC m=+0.507999122 container died 52b128ac3e5764150fbdf484fe4509ba4baf8c5a5d029168dada9bf75776dc2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_bardeen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:50:26 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4b86b02b2fbfe16ff61c9c276e24fbcf73b2a01ce75d839d1c6ef982d125470a-merged.mount: Deactivated successfully.
Dec  7 04:50:26 np0005549474 podman[137620]: 2025-12-07 09:50:26.157183114 +0000 UTC m=+0.549665515 container remove 52b128ac3e5764150fbdf484fe4509ba4baf8c5a5d029168dada9bf75776dc2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_bardeen, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 04:50:26 np0005549474 systemd[1]: libpod-conmon-52b128ac3e5764150fbdf484fe4509ba4baf8c5a5d029168dada9bf75776dc2c.scope: Deactivated successfully.
Dec  7 04:50:26 np0005549474 python3.9[137802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:50:26 np0005549474 podman[138003]: 2025-12-07 09:50:26.659625287 +0000 UTC m=+0.038476458 container create 0791533dbf19efccbea476fe5eb010179cccd622247e6bc62f7c5c5f9b44dbfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_pare, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:50:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:26 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:26 np0005549474 systemd[1]: Started libpod-conmon-0791533dbf19efccbea476fe5eb010179cccd622247e6bc62f7c5c5f9b44dbfc.scope.
Dec  7 04:50:26 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:50:26 np0005549474 podman[138003]: 2025-12-07 09:50:26.643859697 +0000 UTC m=+0.022710888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:50:26 np0005549474 podman[138003]: 2025-12-07 09:50:26.74665642 +0000 UTC m=+0.125507611 container init 0791533dbf19efccbea476fe5eb010179cccd622247e6bc62f7c5c5f9b44dbfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  7 04:50:26 np0005549474 podman[138003]: 2025-12-07 09:50:26.754613673 +0000 UTC m=+0.133464844 container start 0791533dbf19efccbea476fe5eb010179cccd622247e6bc62f7c5c5f9b44dbfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 04:50:26 np0005549474 podman[138003]: 2025-12-07 09:50:26.757271214 +0000 UTC m=+0.136122385 container attach 0791533dbf19efccbea476fe5eb010179cccd622247e6bc62f7c5c5f9b44dbfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 04:50:26 np0005549474 magical_pare[138070]: 167 167
Dec  7 04:50:26 np0005549474 systemd[1]: libpod-0791533dbf19efccbea476fe5eb010179cccd622247e6bc62f7c5c5f9b44dbfc.scope: Deactivated successfully.
Dec  7 04:50:26 np0005549474 podman[138003]: 2025-12-07 09:50:26.75859573 +0000 UTC m=+0.137446941 container died 0791533dbf19efccbea476fe5eb010179cccd622247e6bc62f7c5c5f9b44dbfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 04:50:26 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f188c612d84ed2000cdadf37fdfc260e4d5235c813555603f74da9f24466cac6-merged.mount: Deactivated successfully.
Dec  7 04:50:26 np0005549474 podman[138003]: 2025-12-07 09:50:26.79832135 +0000 UTC m=+0.177172521 container remove 0791533dbf19efccbea476fe5eb010179cccd622247e6bc62f7c5c5f9b44dbfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_pare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:50:26 np0005549474 systemd[1]: libpod-conmon-0791533dbf19efccbea476fe5eb010179cccd622247e6bc62f7c5c5f9b44dbfc.scope: Deactivated successfully.
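[editor's note] The podman create/init/start/attach and died/remove pairs above (containers vigilant_bardeen, magical_pare, and the others that follow) are cephadm launching short-lived ceph containers that run one command, exit within a second, and are removed, which is why systemd immediately reports the matching libpod and libpod-conmon scopes as deactivated. A minimal sketch for watching this lifecycle live, assuming the podman CLI and the field names of its JSON event stream:

    import json
    import subprocess

    # Stream container events as JSON lines and print each lifecycle step.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # "Status" (create/init/start/attach/died/remove) and "Name" are
        # assumed from podman's JSON event output.
        print(ev.get("Status"), ev.get("Name"))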
Dec  7 04:50:26 np0005549474 python3.9[138075]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:26 np0005549474 podman[138098]: 2025-12-07 09:50:26.968800161 +0000 UTC m=+0.042309050 container create f8e7300e613399171cc93f2ce38444a406752ef5a5cad290787298f1cb655181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:50:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:50:26.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:50:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:50:26.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
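[editor's note] These two alertmanager lines show the ceph-dashboard webhook receivers on compute-1 and compute-2 timing out (dial i/o timeout, then retries canceled at the context deadline), so the notification is dropped after both targets fail. A quick reachability probe for the same endpoint, as a sketch (URL copied from the log; the empty JSON body and 5-second timeout are arbitrary choices):

    import urllib.request
    import urllib.error

    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        # Alertmanager POSTs JSON to webhook receivers; an empty object is
        # enough to test whether the listener is reachable at all.
        req = urllib.request.Request(url, data=b"{}", method="POST")
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except (urllib.error.URLError, OSError) as exc:
        print("unreachable:", exc)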
Dec  7 04:50:27 np0005549474 podman[138098]: 2025-12-07 09:50:26.949630499 +0000 UTC m=+0.023139438 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:50:27 np0005549474 systemd[1]: Started libpod-conmon-f8e7300e613399171cc93f2ce38444a406752ef5a5cad290787298f1cb655181.scope.
Dec  7 04:50:27 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:50:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f662f17e4c83766a2141b260ab0a5d0e25b159d3f30665e13a14780902b3bcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f662f17e4c83766a2141b260ab0a5d0e25b159d3f30665e13a14780902b3bcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f662f17e4c83766a2141b260ab0a5d0e25b159d3f30665e13a14780902b3bcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f662f17e4c83766a2141b260ab0a5d0e25b159d3f30665e13a14780902b3bcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
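[editor's note] These repeated kernel notices fire each time podman bind-mounts paths from the XFS-backed overlay store: the filesystem was created without the XFS bigtime feature, so inode timestamps stop at the 32-bit epoch limit, 0x7fffffff seconds. That limit is easy to pin down:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed epoch second (the y2038 limit).
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00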
Dec  7 04:50:27 np0005549474 podman[138098]: 2025-12-07 09:50:27.082593038 +0000 UTC m=+0.156101937 container init f8e7300e613399171cc93f2ce38444a406752ef5a5cad290787298f1cb655181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 04:50:27 np0005549474 podman[138098]: 2025-12-07 09:50:27.092339489 +0000 UTC m=+0.165848378 container start f8e7300e613399171cc93f2ce38444a406752ef5a5cad290787298f1cb655181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:50:27 np0005549474 podman[138098]: 2025-12-07 09:50:27.095733599 +0000 UTC m=+0.169242498 container attach f8e7300e613399171cc93f2ce38444a406752ef5a5cad290787298f1cb655181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  7 04:50:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:27 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd380030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:27.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]: {
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:    "0": [
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:        {
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            "devices": [
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "/dev/loop3"
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            ],
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            "lv_name": "ceph_lv0",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            "lv_size": "21470642176",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            "name": "ceph_lv0",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            "tags": {
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.cluster_name": "ceph",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.crush_device_class": "",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.encrypted": "0",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.osd_id": "0",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.type": "block",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.vdo": "0",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:                "ceph.with_tpm": "0"
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            },
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            "type": "block",
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:            "vg_name": "ceph_vg0"
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:        }
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]:    ]
Dec  7 04:50:27 np0005549474 romantic_blackwell[138138]: }
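[editor's note] The JSON emitted by the romantic_blackwell container is ceph-volume lvm list output: OSD 0 lives on LV ceph_vg0/ceph_lv0 backed by /dev/loop3, with the cluster fsid and osd_fsid duplicated in the LV tags. A sketch that pulls the useful fields back out of such a blob (the literal below is a trimmed copy of the output above, just enough to make the example self-contained):

    import json

    raw = json.dumps({
        "0": [{
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "devices": ["/dev/loop3"],
            "tags": {"ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
                     "ceph.osd_id": "0"},
        }]
    })
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: lv={lv['lv_path']} devices={lv['devices']} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")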
Dec  7 04:50:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:50:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
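[editor's note] The mon audit trail shows the mgr (cephadm's serve loop) dispatching osd blocklist ls with JSON formatting. The same query can be run by hand; a subprocess sketch, assuming the ceph CLI and a readable admin keyring on the host:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for entry in json.loads(out):
        # Each entry describes one blocklisted client address and its expiry.
        print(entry)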
Dec  7 04:50:27 np0005549474 systemd[1]: libpod-f8e7300e613399171cc93f2ce38444a406752ef5a5cad290787298f1cb655181.scope: Deactivated successfully.
Dec  7 04:50:27 np0005549474 podman[138098]: 2025-12-07 09:50:27.414965622 +0000 UTC m=+0.488474541 container died f8e7300e613399171cc93f2ce38444a406752ef5a5cad290787298f1cb655181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:50:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:27 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4f662f17e4c83766a2141b260ab0a5d0e25b159d3f30665e13a14780902b3bcd-merged.mount: Deactivated successfully.
Dec  7 04:50:27 np0005549474 podman[138098]: 2025-12-07 09:50:27.454511677 +0000 UTC m=+0.528020566 container remove f8e7300e613399171cc93f2ce38444a406752ef5a5cad290787298f1cb655181 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 04:50:27 np0005549474 systemd[1]: libpod-conmon-f8e7300e613399171cc93f2ce38444a406752ef5a5cad290787298f1cb655181.scope: Deactivated successfully.
Dec  7 04:50:27 np0005549474 python3.9[138273]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:27 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:27.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
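[editor's note] The paired radosgw "starting new request"/"req done"/beast lines every two seconds are anonymous HEAD / HTTP/1.0 probes from 192.168.122.100 and 192.168.122.102, i.e. load-balancer style health checks, each answered 200 with near-zero latency. The probe itself is trivial to reproduce (the target host and port here are assumptions; the log does not record the listening endpoint):

    import http.client

    # Issue the same anonymous probe the health checker sends: HEAD /.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # a healthy radosgw answers 200 OK
    conn.close()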
Dec  7 04:50:28 np0005549474 podman[138502]: 2025-12-07 09:50:28.001781457 +0000 UTC m=+0.061529843 container create d362805da2f7b171df55859dd11bc155f8e479a4f6adb87b2466d821b666aac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 04:50:28 np0005549474 systemd[1]: Started libpod-conmon-d362805da2f7b171df55859dd11bc155f8e479a4f6adb87b2466d821b666aac5.scope.
Dec  7 04:50:28 np0005549474 python3.9[138487]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101027.0946946-492-273651230293578/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=35884589adcfe8db7f1ee3795d0e9c52136b4ce5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:28 np0005549474 podman[138502]: 2025-12-07 09:50:27.972430254 +0000 UTC m=+0.032178670 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:50:28 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:50:28 np0005549474 podman[138502]: 2025-12-07 09:50:28.089599072 +0000 UTC m=+0.149347478 container init d362805da2f7b171df55859dd11bc155f8e479a4f6adb87b2466d821b666aac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:50:28 np0005549474 podman[138502]: 2025-12-07 09:50:28.097524724 +0000 UTC m=+0.157273110 container start d362805da2f7b171df55859dd11bc155f8e479a4f6adb87b2466d821b666aac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 04:50:28 np0005549474 podman[138502]: 2025-12-07 09:50:28.100809471 +0000 UTC m=+0.160557867 container attach d362805da2f7b171df55859dd11bc155f8e479a4f6adb87b2466d821b666aac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  7 04:50:28 np0005549474 gallant_stonebraker[138518]: 167 167
Dec  7 04:50:28 np0005549474 systemd[1]: libpod-d362805da2f7b171df55859dd11bc155f8e479a4f6adb87b2466d821b666aac5.scope: Deactivated successfully.
Dec  7 04:50:28 np0005549474 podman[138502]: 2025-12-07 09:50:28.103444162 +0000 UTC m=+0.163192548 container died d362805da2f7b171df55859dd11bc155f8e479a4f6adb87b2466d821b666aac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:50:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095028 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
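[editor's note] The haproxy warning marks backend nfs.cephfs.1 DOWN after a Layer4 check got "Connection refused" in 0 ms, leaving two active backends; this matches the ganesha restarts happening elsewhere in this window. A layer-4 check is just a TCP connect, sketched below (the backend address and NFS port 2049 are assumptions, the log line does not show them):

    import socket

    def l4_check(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connect succeeds, like haproxy's L4 check."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(l4_check("192.168.122.101", 2049))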
Dec  7 04:50:28 np0005549474 systemd[1]: var-lib-containers-storage-overlay-dfbc8eca6921c3fa7551351278dd0768ccf703869bb5f5b6f7c8325f33c660b5-merged.mount: Deactivated successfully.
Dec  7 04:50:28 np0005549474 podman[138502]: 2025-12-07 09:50:28.151505245 +0000 UTC m=+0.211253631 container remove d362805da2f7b171df55859dd11bc155f8e479a4f6adb87b2466d821b666aac5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:50:28 np0005549474 systemd[1]: libpod-conmon-d362805da2f7b171df55859dd11bc155f8e479a4f6adb87b2466d821b666aac5.scope: Deactivated successfully.
Dec  7 04:50:28 np0005549474 podman[138606]: 2025-12-07 09:50:28.29180238 +0000 UTC m=+0.037210164 container create 12fecc8b0c0463c3ca9976e570eea0189c8589f88f50fb0d8aa33857a508a18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_rhodes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  7 04:50:28 np0005549474 systemd[1]: Started libpod-conmon-12fecc8b0c0463c3ca9976e570eea0189c8589f88f50fb0d8aa33857a508a18d.scope.
Dec  7 04:50:28 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:50:28 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a348b49e916b8d2847c7558c652d1c3a510d3264fe17946e2698cb2965abcc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:28 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a348b49e916b8d2847c7558c652d1c3a510d3264fe17946e2698cb2965abcc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:28 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a348b49e916b8d2847c7558c652d1c3a510d3264fe17946e2698cb2965abcc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:28 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a348b49e916b8d2847c7558c652d1c3a510d3264fe17946e2698cb2965abcc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:50:28 np0005549474 podman[138606]: 2025-12-07 09:50:28.366613227 +0000 UTC m=+0.112021031 container init 12fecc8b0c0463c3ca9976e570eea0189c8589f88f50fb0d8aa33857a508a18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 04:50:28 np0005549474 podman[138606]: 2025-12-07 09:50:28.276336307 +0000 UTC m=+0.021744121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:50:28 np0005549474 podman[138606]: 2025-12-07 09:50:28.372869424 +0000 UTC m=+0.118277208 container start 12fecc8b0c0463c3ca9976e570eea0189c8589f88f50fb0d8aa33857a508a18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:50:28 np0005549474 podman[138606]: 2025-12-07 09:50:28.377736954 +0000 UTC m=+0.123144738 container attach 12fecc8b0c0463c3ca9976e570eea0189c8589f88f50fb0d8aa33857a508a18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_rhodes, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 04:50:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:50:28 np0005549474 python3.9[138715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:28 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd44002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:28 np0005549474 lvm[138881]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:50:28 np0005549474 lvm[138881]: VG ceph_vg0 finished
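[editor's note] The two lvm lines are event-driven autoactivation: udev reported /dev/loop3, pvscan marked the PV online, and once every PV of ceph_vg0 was present the VG was declared complete and activated. The resulting LVs can be listed as JSON; a sketch using LVM2's standard report format:

    import json
    import subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json", "ceph_vg0"],
        capture_output=True, text=True, check=True,
    ).stdout
    # LVM nests the rows under report["report"][0]["lv"] in its JSON output.
    for lv in json.loads(out)["report"][0]["lv"]:
        print(lv["lv_name"], lv["lv_size"])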
Dec  7 04:50:29 np0005549474 distracted_rhodes[138658]: {}
Dec  7 04:50:29 np0005549474 systemd[1]: libpod-12fecc8b0c0463c3ca9976e570eea0189c8589f88f50fb0d8aa33857a508a18d.scope: Deactivated successfully.
Dec  7 04:50:29 np0005549474 podman[138606]: 2025-12-07 09:50:29.046560639 +0000 UTC m=+0.791968493 container died 12fecc8b0c0463c3ca9976e570eea0189c8589f88f50fb0d8aa33857a508a18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 04:50:29 np0005549474 systemd[1]: libpod-12fecc8b0c0463c3ca9976e570eea0189c8589f88f50fb0d8aa33857a508a18d.scope: Consumed 1.040s CPU time.
Dec  7 04:50:29 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0a348b49e916b8d2847c7558c652d1c3a510d3264fe17946e2698cb2965abcc1-merged.mount: Deactivated successfully.
Dec  7 04:50:29 np0005549474 podman[138606]: 2025-12-07 09:50:29.094987712 +0000 UTC m=+0.840395516 container remove 12fecc8b0c0463c3ca9976e570eea0189c8589f88f50fb0d8aa33857a508a18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_rhodes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:50:29 np0005549474 systemd[1]: libpod-conmon-12fecc8b0c0463c3ca9976e570eea0189c8589f88f50fb0d8aa33857a508a18d.scope: Deactivated successfully.
Dec  7 04:50:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:50:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:50:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
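[editor's note] The config-key set commands with keys like mgr/cephadm/host.compute-0.devices.0 are cephadm persisting the per-host device inventory it just gathered (via the ceph-volume containers above) into the mon config-key store. The cached blob can be read back; a sketch, again assuming CLI and keyring access:

    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(
        ["ceph", "config-key", "get", key],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # the device inventory cephadm cached for compute-0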
Dec  7 04:50:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:29 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:29 np0005549474 python3.9[138910]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101028.2171056-492-212782970989836/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7f7d1cd622d2240bbe15befe04459424cf20396a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:29.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:29 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd380030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:29.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:29 np0005549474 python3.9[139103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:29] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:50:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:29] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:50:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:50:30 np0005549474 python3.9[139226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101029.370609-492-44431264633512/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=06d9184c13de8850395b20f9446bfe3fae9f2d66 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:50:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:30 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:31 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd44002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:31.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:31 np0005549474 python3.9[139380]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:31 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:31.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:31 np0005549474 python3.9[139532]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:50:32 np0005549474 python3.9[139655]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101031.525192-668-101466734614916/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
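[editor's note] Every tls-ca-bundle.pem copied in this run carries the same sha1 checksum (04e3974ae626deea30737932cd4a2d2f473c7179): one CA bundle is being fanned out into per-service cacert directories (ovn here, then libvirt, neutron-metadata, bootstrap, repo-setup, and nova below). Verifying a copy against the logged checksum takes a few lines of hashlib:

    import hashlib

    expected = "04e3974ae626deea30737932cd4a2d2f473c7179"
    path = "/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem"
    digest = hashlib.sha1(open(path, "rb").read()).hexdigest()
    print("ok" if digest == expected else f"mismatch: {digest}")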
Dec  7 04:50:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:32 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd380041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:33 np0005549474 python3.9[139808]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:33 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:33.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:33 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd44002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:33.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:33 np0005549474 python3.9[139961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:34 np0005549474 python3.9[140084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101033.3169987-745-234903403041970/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:50:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:34 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:34 np0005549474 python3.9[140236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:35 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd380041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:35.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:35 np0005549474 python3.9[140390]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:35 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:35.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:36 np0005549474 python3.9[140513]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101035.0554862-807-55106953996831/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:50:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:36 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd44002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:36 np0005549474 python3.9[140665]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:50:36.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:50:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:37 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:37 np0005549474 python3.9[140834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:50:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:37.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:50:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:37 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
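[editor's note] "NFS Server Now IN GRACE, duration 90" means ganesha entered its 90-second grace window after the restart: new state-creating operations are refused while surviving clients may reclaim their locks. The reaper lines at 09:50:40 below then show the client recovery database reloading and the lift check finding reclaim complete with a client count of 0, so the grace period can end early. The nominal deadline, computed from the log timestamp:

    from datetime import datetime, timedelta, timezone

    start = datetime(2025, 12, 7, 9, 50, 37, tzinfo=timezone.utc)  # log time
    print(start + timedelta(seconds=90))  # nominal grace end: 09:52:07 UTC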
Dec  7 04:50:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:37 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd380041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:37.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:37 np0005549474 python3.9[140967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101036.8720472-876-273887383593896/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:50:38 np0005549474 python3.9[141119]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:38 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:39 np0005549474 python3.9[141272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:39 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd44002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:39.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:39 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:39.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:39 np0005549474 python3.9[141396]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101038.666608-936-178471841035954/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
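The stat/copy pairs from ansible above implement idempotent file distribution: the stat call computes the destination's SHA-1 (checksum_algorithm=sha1), and the copy only rewrites the file when that differs from the source checksum (here 04e3974ae626... for the same tls-ca-bundle.pem in each target directory). A minimal sketch of that comparison, using a hypothetical helper rather than Ansible's actual implementation:

import hashlib
import os
import shutil

def sha1_of(path):
    """SHA-1 of a file's contents, the checksum the stat/copy modules compare."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_if_changed(src, dest):
    """Copy src over dest only when the checksums differ (idempotent update)."""
    if os.path.exists(dest) and sha1_of(dest) == sha1_of(src):
        return False  # already converged; report "ok" rather than "changed"
    shutil.copy2(src, dest)
    return True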
Dec  7 04:50:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:39] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:50:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:39] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:50:40 np0005549474 python3.9[141548]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:50:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:40 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:50:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:40 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:50:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec  7 04:50:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:40 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd380041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:40 np0005549474 python3.9[141703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:41 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:41.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:41 np0005549474 python3.9[141827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101040.5487185-1005-268296968108804/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:41 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:41.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:50:42
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes', 'images', '.nfs', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:50:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:50:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
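The audit line shows the mgr (entity mgr.compute-0.dotugk) polling the OSD blocklist; the same query can be issued from any host with an admin keyring. A sketch via the ceph CLI (the JSON output shape, an array of blocklist entries, is my assumption):

import json
import subprocess

# Same command the mgr dispatches above, run through the CLI.
out = subprocess.run(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
entries = json.loads(out) if out.strip() else []
print(f"{len(entries)} blocklist entries")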
Dec  7 04:50:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
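Each pg_autoscaler line above reports a capacity ratio, a bias, and a raw pg target, where the target is capacity_ratio * bias * root_pg_target. Taking root_pg_target = 300 (my assumption: 3 OSDs at the default mon_target_pg_per_osd of 100) reproduces the logged values exactly; a worked check:

ROOT_PG_TARGET = 300  # assumed: 3 OSDs x mon_target_pg_per_osd (default 100)

def pg_target(capacity_ratio, bias):
    return capacity_ratio * bias * ROOT_PG_TARGET

# Pool '.mgr': using 7.185749983720779e-06 of space, bias 1.0
assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12
# Pool 'cephfs.cephfs.meta': using 5.087256625643029e-07 of space, bias 4.0
assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12
# Pool '.nfs': using 6.359070782053786e-08 of space, bias 1.0
assert abs(pg_target(6.359070782053786e-08, 1.0) - 1.907721234616136e-05) < 1e-12

The "quantized to" figure is what remains after power-of-two rounding, pool minimums, and the keep-current rule (the autoscaler only moves pg_num when the target is off by roughly a factor of 3), which is why every pool above stays at its current value.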
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:50:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:50:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:42 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 04:50:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6888 writes, 29K keys, 6888 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6888 writes, 1196 syncs, 5.76 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6888 writes, 29K keys, 6888 commit groups, 1.0 writes per commit group, ingest: 20.43 MB, 0.03 MB/s#012Interval WAL: 6888 writes, 1196 syncs, 5.76 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  7 04:50:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:43 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd50000f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:43.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:43 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
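The reaper lines trace a full NFS-Ganesha grace cycle: grace starts at 09:50:37 with a 90 s budget, client recovery info is reloaded from the backend, and since no clients have pending reclaims (reclaim complete(0), clid count(0)) grace is lifted at 09:50:43 rather than running the full 90 s. A simplified sketch of that early-lift rule (my paraphrase; real ganesha also coordinates grace cluster-wide through its recovery backend):

import time

GRACE_SECONDS = 90  # duration logged by nfs_start_grace above

def may_lift_grace(started_at, reclaim_complete, clid_count):
    """Lift early once every known client has finished reclaim (or none
    exist); otherwise hold until the grace window expires."""
    if clid_count == 0 or reclaim_complete >= clid_count:
        return True
    return time.monotonic() - started_at >= GRACE_SECONDS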
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.571868) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043571904, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 764, "num_deletes": 251, "total_data_size": 1199726, "memory_usage": 1224688, "flush_reason": "Manual Compaction"}
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043579034, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1169750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12567, "largest_seqno": 13329, "table_properties": {"data_size": 1165861, "index_size": 1669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8633, "raw_average_key_size": 19, "raw_value_size": 1158030, "raw_average_value_size": 2550, "num_data_blocks": 73, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100988, "oldest_key_time": 1765100988, "file_creation_time": 1765101043, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 7208 microseconds, and 3186 cpu microseconds.
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.579075) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1169750 bytes OK
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.579096) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.581126) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.581144) EVENT_LOG_v1 {"time_micros": 1765101043581139, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.581162) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1195915, prev total WAL file size 1195915, number of live WAL files 2.
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.581717) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1142KB)], [29(13MB)]
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043581781, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15260260, "oldest_snapshot_seqno": -1}
Dec  7 04:50:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:43 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  7 04:50:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:43.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4351 keys, 13470858 bytes, temperature: kUnknown
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043752862, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 13470858, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13438825, "index_size": 20083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 111409, "raw_average_key_size": 25, "raw_value_size": 13356397, "raw_average_value_size": 3069, "num_data_blocks": 849, "num_entries": 4351, "num_filter_entries": 4351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765101043, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.753084) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13470858 bytes
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.755347) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.2 rd, 78.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 13.4 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(24.6) write-amplify(11.5) OK, records in: 4869, records dropped: 518 output_compression: NoCompression
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.755363) EVENT_LOG_v1 {"time_micros": 1765101043755355, "job": 12, "event": "compaction_finished", "compaction_time_micros": 171153, "compaction_time_cpu_micros": 25474, "output_level": 6, "num_output_files": 1, "total_output_size": 13470858, "num_input_records": 4869, "num_output_records": 4351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
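The JOB 12 summary can be reproduced from the byte counts in the surrounding EVENT_LOG entries: the L0 input is table #31 (1,169,750 bytes), total input_data_size is 15,260,260 bytes (#31 plus L6 table #29), and the output table #32 is 13,470,858 bytes. Write amplification is output over L0 input; read-write amplification adds all reads as well:

# Byte counts from the EVENT_LOG entries above (job 12).
l0_input = 1_169_750       # table #31, the freshly flushed L0 file
total_input = 15_260_260   # input_data_size: #31 (L0) + #29 (L6)
output = 13_470_858        # table #32, written back to L6

write_amp = output / l0_input                       # -> 11.5
read_write_amp = (total_input + output) / l0_input  # -> 24.6
print(f"write-amplify({write_amp:.1f}) read-write-amplify({read_write_amp:.1f})")

This matches the logged read-write-amplify(24.6) write-amplify(11.5): folding a 1.1 MB flush into L6 rewrites the whole 13 MB level file, which is the expected cost of a manual full compaction on a small mon store.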
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043755678, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043758044, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.581659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.758116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.758121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.758122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.758124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:50:43 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:50:43.758125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:50:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:50:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:44 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:45 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:50:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:45.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:50:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:45 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd50001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:45.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:46 np0005549474 systemd[1]: session-48.scope: Deactivated successfully.
Dec  7 04:50:46 np0005549474 systemd[1]: session-48.scope: Consumed 21.385s CPU time.
Dec  7 04:50:46 np0005549474 systemd-logind[796]: Session 48 logged out. Waiting for processes to exit.
Dec  7 04:50:46 np0005549474 systemd-logind[796]: Removed session 48.
Dec  7 04:50:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 925 B/s wr, 3 op/s
Dec  7 04:50:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:46 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:50:46.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:50:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:50:46.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:50:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:50:46.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
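The alertmanager dispatcher above is failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2: each POST times out, is retried, and the notify is canceled after the configured number of attempts. A minimal stdlib sketch of a webhook POST with a timeout and bounded retries (URL taken from the log; the payload shape is illustrative, not Alertmanager's exact schema):

import json
import time
import urllib.request

URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

def notify(alerts, attempts=3, timeout=10.0):
    """POST alerts, retrying on network errors like the dispatcher above."""
    req = urllib.request.Request(
        URL,
        data=json.dumps({"alerts": alerts}).encode(),
        headers={"Content-Type": "application/json"},
    )
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status
        except OSError as err:  # covers URLError, timeouts, refused connects
            print(f"attempt {attempt} failed, will retry later: {err}")
            time.sleep(2 ** attempt)  # simple backoff; Alertmanager's differs
    raise RuntimeError(f"notify retry canceled after {attempts} attempts")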
Dec  7 04:50:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:47 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:47.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:47 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:47.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 925 B/s wr, 3 op/s
Dec  7 04:50:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:48 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd50001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:49 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:49.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:49 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:49.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:49] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  7 04:50:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:49] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  7 04:50:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095050 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:50:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 925 B/s wr, 3 op/s
Dec  7 04:50:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:50 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:51 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd50001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:51.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:51 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:51.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:51 np0005549474 systemd-logind[796]: New session 49 of user zuul.
Dec  7 04:50:51 np0005549474 systemd[1]: Started Session 49 of User zuul.
Dec  7 04:50:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 420 B/s wr, 1 op/s
Dec  7 04:50:52 np0005549474 python3.9[142017]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:52 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:53 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:53.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:53 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd50002f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:53.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:53 np0005549474 python3.9[142171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:54 np0005549474 python3.9[142294]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101053.0617688-62-20506733951945/.source.conf _original_basename=ceph.conf follow=False checksum=af72f8d2b9ff82597d6797e3be25005bcbb0448d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 420 B/s wr, 2 op/s
Dec  7 04:50:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:54 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:55 np0005549474 python3.9[142447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:50:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:55 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:55.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095055 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
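The Layer4 checks behind these haproxy UP/DOWN flaps are plain TCP connect probes: a completed handshake within the check timeout marks the server UP, while a refused or timed-out connect (the "Connection refused" above) marks it DOWN. An equivalent probe in Python (host and port below are illustrative, not taken from the haproxy config):

import socket

def layer4_check(host, port, timeout=2.0):
    """TCP connect probe: True on a completed handshake, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ECONNREFUSED, timeout, host unreachable, ...
        return False

print(layer4_check("192.168.122.100", 2049))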
Dec  7 04:50:55 np0005549474 python3.9[142571]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101054.6112182-62-51134099907558/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=2eec074211d5644630d1561f0b2053eaf094bdc2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:50:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:55 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:55.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:56 np0005549474 systemd[1]: session-49.scope: Deactivated successfully.
Dec  7 04:50:56 np0005549474 systemd[1]: session-49.scope: Consumed 2.550s CPU time.
Dec  7 04:50:56 np0005549474 systemd-logind[796]: Session 49 logged out. Waiting for processes to exit.
Dec  7 04:50:56 np0005549474 systemd-logind[796]: Removed session 49.
Dec  7 04:50:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 168 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:50:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:56 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:50:56.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:50:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:50:56.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:50:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:50:56.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:50:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:57 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:57.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:50:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:50:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:50:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:57 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:50:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:57.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:50:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:50:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:58 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:59 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:50:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:59.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:50:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:50:59 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:50:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:50:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:50:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:59.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:50:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:59] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Dec  7 04:50:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:50:59] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Dec  7 04:51:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:51:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:00 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd50002f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:01 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:01.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:01 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:01.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:02 np0005549474 systemd-logind[796]: New session 50 of user zuul.
Dec  7 04:51:02 np0005549474 systemd[1]: Started Session 50 of User zuul.
Dec  7 04:51:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:51:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:02 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:03 np0005549474 python3.9[142782]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:51:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:03 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd50003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:51:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:03.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:51:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:03 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:51:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:03.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:51:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:04 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:51:04 np0005549474 python3.9[142939]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:51:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:51:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:04 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:04 np0005549474 python3.9[143091]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:51:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:05 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:05.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:05 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:51:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:05.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:51:05 np0005549474 python3.9[143244]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:51:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:51:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:06 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd5c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:06 np0005549474 python3.9[143396]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  7 04:51:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:51:06.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:51:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:07 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:51:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:07 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:51:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:07 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd30003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:07.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:07 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:07.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:51:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 04:51:08 np0005549474 kernel: ganesha.nfsd[141655]: segfault at 50 ip 00007efe065ab32e sp 00007efdbb7fd210 error 4 in libntirpc.so.5.8[7efe06590000+2c000] likely on CPU 5 (core 0, socket 5)
Dec  7 04:51:08 np0005549474 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  7 04:51:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[133765]: 07/12/2025 09:51:08 : epoch 69354dc4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efd2c0036e0 fd 39 proxy ignored for local
Dec  7 04:51:08 np0005549474 systemd[1]: Started Process Core Dump (PID 143400/UID 0).
Dec  7 04:51:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:09.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:51:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:09.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:51:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:09] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Dec  7 04:51:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:09] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Dec  7 04:51:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095110 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:51:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Dec  7 04:51:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:11.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:51:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:11.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:51:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:51:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:51:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:51:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:51:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:51:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:51:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:51:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:51:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Dec  7 04:51:12 np0005549474 systemd-coredump[143401]: Process 133769 (ganesha.nfsd) of user 0 dumped core.

Stack trace of thread 56:
#0  0x00007efe065ab32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
ELF object binary architecture: AMD x86-64
Dec  7 04:51:13 np0005549474 systemd[1]: systemd-coredump@3-143400-0.service: Deactivated successfully.
Dec  7 04:51:13 np0005549474 systemd[1]: systemd-coredump@3-143400-0.service: Consumed 1.072s CPU time.
Dec  7 04:51:13 np0005549474 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  7 04:51:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:13 np0005549474 podman[143416]: 2025-12-07 09:51:13.165729864 +0000 UTC m=+0.045016835 container died ec6f1bcb3e88969db5073af3d931110f3d7ab4826ee44ff0ba85a053ab4559f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 04:51:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:13.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:13.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:14 np0005549474 systemd[1]: var-lib-containers-storage-overlay-8c3c03d8fbece64e405df1fff6cd1cda0975b603dafaeebd1a4eb5516b37a110-merged.mount: Deactivated successfully.
Dec  7 04:51:14 np0005549474 podman[143416]: 2025-12-07 09:51:14.499120572 +0000 UTC m=+1.378407453 container remove ec6f1bcb3e88969db5073af3d931110f3d7ab4826ee44ff0ba85a053ab4559f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 04:51:14 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Main process exited, code=exited, status=139/n/a
Dec  7 04:51:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 04:51:14 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Failed with result 'exit-code'.
Dec  7 04:51:14 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.407s CPU time.
Dec  7 04:51:15 np0005549474 python3.9[143613]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:51:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:15.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:15.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:16 np0005549474 python3.9[143698]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:51:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Dec  7 04:51:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:51:16.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:51:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:51:16.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:51:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:17.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:51:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:17.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:51:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Dec  7 04:51:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095118 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:51:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [NOTICE] 340/095118 (4) : haproxy version is 2.3.17-d1c9119
Dec  7 04:51:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [NOTICE] 340/095118 (4) : path to executable is /usr/local/sbin/haproxy
Dec  7 04:51:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [ALERT] 340/095118 (4) : backend 'backend' has no server available!
Dec  7 04:51:18 np0005549474 python3.9[143878]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  7 04:51:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:19.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095119 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:51:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:19.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:19 np0005549474 python3[144035]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
  rule:
    proto: udp
    dport: 4789
- rule_name: 119 neutron geneve networks
  rule:
    proto: udp
    dport: 6081
    state: ["UNTRACKED"]
- rule_name: 120 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: OUTPUT
    jump: NOTRACK
    action: append
    state: []
- rule_name: 121 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: PREROUTING
    jump: NOTRACK
    action: append
    state: []
 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  7 04:51:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:19] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Dec  7 04:51:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:19] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Dec  7 04:51:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 04:51:20 np0005549474 python3.9[144187]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:51:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:21.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:51:21 np0005549474 python3.9[144341]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:51:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:21.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:51:22 np0005549474 python3.9[144419]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec  7 04:51:22 np0005549474 python3.9[144571]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:23.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:23 np0005549474 python3.9[144651]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8n_91c2m recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:23.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:24 np0005549474 python3.9[144803]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:24 np0005549474 python3.9[144881]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s
Dec  7 04:51:24 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Scheduled restart job, restart counter is at 4.
Dec  7 04:51:24 np0005549474 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:51:24 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.407s CPU time.
Dec  7 04:51:24 np0005549474 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:51:24 np0005549474 podman[144949]: 2025-12-07 09:51:24.928119039 +0000 UTC m=+0.097201945 container create 9a8778d6ded8c07be14f9e3a22999930c8011d2a1b74258631aab150f24e2bcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 04:51:24 np0005549474 podman[144949]: 2025-12-07 09:51:24.856378953 +0000 UTC m=+0.025461869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:51:24 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce87a1684893deca6ff5188c7643cd22449d23b2367189295027fbd88ba15f5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:24 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce87a1684893deca6ff5188c7643cd22449d23b2367189295027fbd88ba15f5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:24 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce87a1684893deca6ff5188c7643cd22449d23b2367189295027fbd88ba15f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:24 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce87a1684893deca6ff5188c7643cd22449d23b2367189295027fbd88ba15f5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bjrqrk-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:24 np0005549474 podman[144949]: 2025-12-07 09:51:24.997492991 +0000 UTC m=+0.166575917 container init 9a8778d6ded8c07be14f9e3a22999930c8011d2a1b74258631aab150f24e2bcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:51:25 np0005549474 podman[144949]: 2025-12-07 09:51:25.007488135 +0000 UTC m=+0.176571031 container start 9a8778d6ded8c07be14f9e3a22999930c8011d2a1b74258631aab150f24e2bcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:51:25 np0005549474 bash[144949]: 9a8778d6ded8c07be14f9e3a22999930c8011d2a1b74258631aab150f24e2bcb
Dec  7 04:51:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 04:51:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 04:51:25 np0005549474 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:51:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 04:51:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 04:51:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 04:51:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 04:51:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 04:51:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:51:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:51:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:25.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:51:25 np0005549474 python3.9[145134]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:51:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:25.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:26 np0005549474 python3[145287]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  7 04:51:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:51:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:51:26.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:51:27 np0005549474 python3.9[145440]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:51:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:51:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:27.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:27.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:28 np0005549474 python3.9[145566]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101086.7790177-431-124715199914880/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:51:28 np0005549474 python3.9[145718]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:51:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:29.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:51:29 np0005549474 python3.9[145845]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101088.4725366-476-140743517474621/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:51:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:29.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:51:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:51:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:51:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:29] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Dec  7 04:51:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:29] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:30 np0005549474 python3.9[146076]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:51:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:51:31 np0005549474 python3.9[146202]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101089.9485688-521-88001399110560/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 04:51:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:51:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:51:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:51:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:31.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:31 np0005549474 podman[146318]: 2025-12-07 09:51:31.376858817 +0000 UTC m=+0.023681599 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:51:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:31.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:31 np0005549474 python3.9[146460]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:31 np0005549474 podman[146318]: 2025-12-07 09:51:31.896120711 +0000 UTC m=+0.542943453 container create 9f66584a5dfb58d96365743145116fe5670c3fc81ce37d193a70f5be5fd22533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_meitner, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 04:51:31 np0005549474 systemd[1]: Started libpod-conmon-9f66584a5dfb58d96365743145116fe5670c3fc81ce37d193a70f5be5fd22533.scope.
Dec  7 04:51:31 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:51:32 np0005549474 podman[146318]: 2025-12-07 09:51:32.013627832 +0000 UTC m=+0.660450594 container init 9f66584a5dfb58d96365743145116fe5670c3fc81ce37d193a70f5be5fd22533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_meitner, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 04:51:32 np0005549474 podman[146318]: 2025-12-07 09:51:32.021192188 +0000 UTC m=+0.668014910 container start 9f66584a5dfb58d96365743145116fe5670c3fc81ce37d193a70f5be5fd22533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 04:51:32 np0005549474 infallible_meitner[146488]: 167 167
Dec  7 04:51:32 np0005549474 systemd[1]: libpod-9f66584a5dfb58d96365743145116fe5670c3fc81ce37d193a70f5be5fd22533.scope: Deactivated successfully.
Dec  7 04:51:32 np0005549474 podman[146318]: 2025-12-07 09:51:32.093192662 +0000 UTC m=+0.740015384 container attach 9f66584a5dfb58d96365743145116fe5670c3fc81ce37d193a70f5be5fd22533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_meitner, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:51:32 np0005549474 podman[146318]: 2025-12-07 09:51:32.094101487 +0000 UTC m=+0.740924189 container died 9f66584a5dfb58d96365743145116fe5670c3fc81ce37d193a70f5be5fd22533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_meitner, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:51:32 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d2995d6e00201a9ced56b131682e303cb91f8c513858afccdb0c8320a8cc9022-merged.mount: Deactivated successfully.
Dec  7 04:51:32 np0005549474 podman[146318]: 2025-12-07 09:51:32.381120554 +0000 UTC m=+1.027943256 container remove 9f66584a5dfb58d96365743145116fe5670c3fc81ce37d193a70f5be5fd22533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_meitner, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 04:51:32 np0005549474 systemd[1]: libpod-conmon-9f66584a5dfb58d96365743145116fe5670c3fc81ce37d193a70f5be5fd22533.scope: Deactivated successfully.
Dec  7 04:51:32 np0005549474 podman[146614]: 2025-12-07 09:51:32.568015707 +0000 UTC m=+0.076740885 container create 0d4a481d88060b03b10aebfa7e5ef000d1a1c1560dd363a106e000d31a180f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_euclid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  7 04:51:32 np0005549474 podman[146614]: 2025-12-07 09:51:32.522359715 +0000 UTC m=+0.031084943 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:51:32 np0005549474 systemd[1]: Started libpod-conmon-0d4a481d88060b03b10aebfa7e5ef000d1a1c1560dd363a106e000d31a180f01.scope.
Dec  7 04:51:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:51:32 np0005549474 python3.9[146608]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101091.3875375-566-244039003425450/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:32 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:51:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee1fc4316300f83e66478ace8721fe78bae5f8ae9b5cb6df0e42002f11785ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee1fc4316300f83e66478ace8721fe78bae5f8ae9b5cb6df0e42002f11785ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee1fc4316300f83e66478ace8721fe78bae5f8ae9b5cb6df0e42002f11785ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee1fc4316300f83e66478ace8721fe78bae5f8ae9b5cb6df0e42002f11785ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee1fc4316300f83e66478ace8721fe78bae5f8ae9b5cb6df0e42002f11785ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:32 np0005549474 podman[146614]: 2025-12-07 09:51:32.664382638 +0000 UTC m=+0.173107836 container init 0d4a481d88060b03b10aebfa7e5ef000d1a1c1560dd363a106e000d31a180f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:51:32 np0005549474 podman[146614]: 2025-12-07 09:51:32.670975909 +0000 UTC m=+0.179701077 container start 0d4a481d88060b03b10aebfa7e5ef000d1a1c1560dd363a106e000d31a180f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 04:51:32 np0005549474 podman[146614]: 2025-12-07 09:51:32.676820409 +0000 UTC m=+0.185545617 container attach 0d4a481d88060b03b10aebfa7e5ef000d1a1c1560dd363a106e000d31a180f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:51:32 np0005549474 angry_euclid[146631]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:51:32 np0005549474 angry_euclid[146631]: --> All data devices are unavailable
Dec  7 04:51:33 np0005549474 systemd[1]: libpod-0d4a481d88060b03b10aebfa7e5ef000d1a1c1560dd363a106e000d31a180f01.scope: Deactivated successfully.
Dec  7 04:51:33 np0005549474 podman[146614]: 2025-12-07 09:51:33.026671778 +0000 UTC m=+0.535396976 container died 0d4a481d88060b03b10aebfa7e5ef000d1a1c1560dd363a106e000d31a180f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 04:51:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0ee1fc4316300f83e66478ace8721fe78bae5f8ae9b5cb6df0e42002f11785ae-merged.mount: Deactivated successfully.
Dec  7 04:51:33 np0005549474 podman[146614]: 2025-12-07 09:51:33.073820121 +0000 UTC m=+0.582545339 container remove 0d4a481d88060b03b10aebfa7e5ef000d1a1c1560dd363a106e000d31a180f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:51:33 np0005549474 systemd[1]: libpod-conmon-0d4a481d88060b03b10aebfa7e5ef000d1a1c1560dd363a106e000d31a180f01.scope: Deactivated successfully.
Dec  7 04:51:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:33.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:33 np0005549474 python3.9[146862]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:33 np0005549474 podman[146903]: 2025-12-07 09:51:33.652851362 +0000 UTC m=+0.039181315 container create 15948ee65be5db69ba988af413e13108238b8925fdcdf59f5740cf8bcd345a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 04:51:33 np0005549474 systemd[1]: Started libpod-conmon-15948ee65be5db69ba988af413e13108238b8925fdcdf59f5740cf8bcd345a1f.scope.
Dec  7 04:51:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:51:33 np0005549474 podman[146903]: 2025-12-07 09:51:33.720543378 +0000 UTC m=+0.106873341 container init 15948ee65be5db69ba988af413e13108238b8925fdcdf59f5740cf8bcd345a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 04:51:33 np0005549474 podman[146903]: 2025-12-07 09:51:33.7268282 +0000 UTC m=+0.113158163 container start 15948ee65be5db69ba988af413e13108238b8925fdcdf59f5740cf8bcd345a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:51:33 np0005549474 podman[146903]: 2025-12-07 09:51:33.634795557 +0000 UTC m=+0.021125560 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:51:33 np0005549474 magical_euclid[146923]: 167 167
Dec  7 04:51:33 np0005549474 podman[146903]: 2025-12-07 09:51:33.730291084 +0000 UTC m=+0.116621067 container attach 15948ee65be5db69ba988af413e13108238b8925fdcdf59f5740cf8bcd345a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:51:33 np0005549474 systemd[1]: libpod-15948ee65be5db69ba988af413e13108238b8925fdcdf59f5740cf8bcd345a1f.scope: Deactivated successfully.
Dec  7 04:51:33 np0005549474 podman[146903]: 2025-12-07 09:51:33.73158006 +0000 UTC m=+0.117910023 container died 15948ee65be5db69ba988af413e13108238b8925fdcdf59f5740cf8bcd345a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:51:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:33.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b0c9d07bf7c0d3dccf262915b79998692489e4b3dd8793fd3dbdc45e5bff0281-merged.mount: Deactivated successfully.
Dec  7 04:51:33 np0005549474 podman[146903]: 2025-12-07 09:51:33.776048169 +0000 UTC m=+0.162378132 container remove 15948ee65be5db69ba988af413e13108238b8925fdcdf59f5740cf8bcd345a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 04:51:33 np0005549474 systemd[1]: libpod-conmon-15948ee65be5db69ba988af413e13108238b8925fdcdf59f5740cf8bcd345a1f.scope: Deactivated successfully.
Dec  7 04:51:33 np0005549474 podman[147025]: 2025-12-07 09:51:33.926183804 +0000 UTC m=+0.040510151 container create e8dd650e455b5cff1caff85baee6e2915d2bfa147959e24da763e40be5fef372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_solomon, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 04:51:33 np0005549474 systemd[1]: Started libpod-conmon-e8dd650e455b5cff1caff85baee6e2915d2bfa147959e24da763e40be5fef372.scope.
Dec  7 04:51:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:51:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a807c9b0336e448469b2dff3014f1d6c22bcfe33bbbd6d2d4b75c4b1fe67abca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a807c9b0336e448469b2dff3014f1d6c22bcfe33bbbd6d2d4b75c4b1fe67abca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a807c9b0336e448469b2dff3014f1d6c22bcfe33bbbd6d2d4b75c4b1fe67abca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a807c9b0336e448469b2dff3014f1d6c22bcfe33bbbd6d2d4b75c4b1fe67abca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:34 np0005549474 podman[147025]: 2025-12-07 09:51:33.908475179 +0000 UTC m=+0.022801536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:51:34 np0005549474 podman[147025]: 2025-12-07 09:51:34.025107016 +0000 UTC m=+0.139433363 container init e8dd650e455b5cff1caff85baee6e2915d2bfa147959e24da763e40be5fef372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 04:51:34 np0005549474 podman[147025]: 2025-12-07 09:51:34.032614481 +0000 UTC m=+0.146940828 container start e8dd650e455b5cff1caff85baee6e2915d2bfa147959e24da763e40be5fef372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_solomon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:51:34 np0005549474 podman[147025]: 2025-12-07 09:51:34.036773135 +0000 UTC m=+0.151099503 container attach e8dd650e455b5cff1caff85baee6e2915d2bfa147959e24da763e40be5fef372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_solomon, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 04:51:34 np0005549474 python3.9[147088]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101092.9970164-611-59847732290367/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095134 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]: {
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:    "0": [
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:        {
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            "devices": [
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "/dev/loop3"
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            ],
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            "lv_name": "ceph_lv0",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            "lv_size": "21470642176",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            "name": "ceph_lv0",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            "tags": {
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.cluster_name": "ceph",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.crush_device_class": "",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.encrypted": "0",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.osd_id": "0",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.type": "block",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.vdo": "0",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:                "ceph.with_tpm": "0"
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            },
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            "type": "block",
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:            "vg_name": "ceph_vg0"
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:        }
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]:    ]
Dec  7 04:51:34 np0005549474 optimistic_solomon[147084]: }
Dec  7 04:51:34 np0005549474 systemd[1]: libpod-e8dd650e455b5cff1caff85baee6e2915d2bfa147959e24da763e40be5fef372.scope: Deactivated successfully.
Dec  7 04:51:34 np0005549474 podman[147025]: 2025-12-07 09:51:34.312710948 +0000 UTC m=+0.427037295 container died e8dd650e455b5cff1caff85baee6e2915d2bfa147959e24da763e40be5fef372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_solomon, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:51:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a807c9b0336e448469b2dff3014f1d6c22bcfe33bbbd6d2d4b75c4b1fe67abca-merged.mount: Deactivated successfully.
Dec  7 04:51:34 np0005549474 podman[147025]: 2025-12-07 09:51:34.353949319 +0000 UTC m=+0.468275666 container remove e8dd650e455b5cff1caff85baee6e2915d2bfa147959e24da763e40be5fef372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_solomon, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 04:51:34 np0005549474 systemd[1]: libpod-conmon-e8dd650e455b5cff1caff85baee6e2915d2bfa147959e24da763e40be5fef372.scope: Deactivated successfully.
Dec  7 04:51:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Dec  7 04:51:34 np0005549474 podman[147305]: 2025-12-07 09:51:34.866135388 +0000 UTC m=+0.029131900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:51:35 np0005549474 python3.9[147365]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:35 np0005549474 podman[147305]: 2025-12-07 09:51:35.176369882 +0000 UTC m=+0.339366374 container create 3b5d265ef3642fe2372fdf0d62a219e0cf3ff6da694dfcd2f8101a21ef5b30a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_williamson, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 04:51:35 np0005549474 systemd[1]: Started libpod-conmon-3b5d265ef3642fe2372fdf0d62a219e0cf3ff6da694dfcd2f8101a21ef5b30a1.scope.
Dec  7 04:51:35 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:51:35 np0005549474 podman[147305]: 2025-12-07 09:51:35.245125766 +0000 UTC m=+0.408122278 container init 3b5d265ef3642fe2372fdf0d62a219e0cf3ff6da694dfcd2f8101a21ef5b30a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 04:51:35 np0005549474 podman[147305]: 2025-12-07 09:51:35.252162029 +0000 UTC m=+0.415158521 container start 3b5d265ef3642fe2372fdf0d62a219e0cf3ff6da694dfcd2f8101a21ef5b30a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 04:51:35 np0005549474 podman[147305]: 2025-12-07 09:51:35.25512617 +0000 UTC m=+0.418122662 container attach 3b5d265ef3642fe2372fdf0d62a219e0cf3ff6da694dfcd2f8101a21ef5b30a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_williamson, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 04:51:35 np0005549474 jolly_williamson[147393]: 167 167
Dec  7 04:51:35 np0005549474 systemd[1]: libpod-3b5d265ef3642fe2372fdf0d62a219e0cf3ff6da694dfcd2f8101a21ef5b30a1.scope: Deactivated successfully.
Dec  7 04:51:35 np0005549474 podman[147305]: 2025-12-07 09:51:35.256738344 +0000 UTC m=+0.419734836 container died 3b5d265ef3642fe2372fdf0d62a219e0cf3ff6da694dfcd2f8101a21ef5b30a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_williamson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:51:35 np0005549474 systemd[1]: var-lib-containers-storage-overlay-86b5393fae7cbef4c9ac77892da086d2fe85a52ea54dd38134965ca698fb3fb0-merged.mount: Deactivated successfully.
Dec  7 04:51:35 np0005549474 podman[147305]: 2025-12-07 09:51:35.29197449 +0000 UTC m=+0.454970982 container remove 3b5d265ef3642fe2372fdf0d62a219e0cf3ff6da694dfcd2f8101a21ef5b30a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_williamson, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 04:51:35 np0005549474 systemd[1]: libpod-conmon-3b5d265ef3642fe2372fdf0d62a219e0cf3ff6da694dfcd2f8101a21ef5b30a1.scope: Deactivated successfully.
Dec  7 04:51:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:35.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:35 np0005549474 podman[147417]: 2025-12-07 09:51:35.430347503 +0000 UTC m=+0.035878785 container create 66b53f3877b64aa8c7e2fdb57d7d9bbbacb855d0dc51791779eb83e3acfad48e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:51:35 np0005549474 systemd[1]: Started libpod-conmon-66b53f3877b64aa8c7e2fdb57d7d9bbbacb855d0dc51791779eb83e3acfad48e.scope.
Dec  7 04:51:35 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:51:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7311d4be4d7f16282600222f6c05ef84414bdd387ac616d82161727d79c8fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7311d4be4d7f16282600222f6c05ef84414bdd387ac616d82161727d79c8fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7311d4be4d7f16282600222f6c05ef84414bdd387ac616d82161727d79c8fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7311d4be4d7f16282600222f6c05ef84414bdd387ac616d82161727d79c8fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:51:35 np0005549474 podman[147417]: 2025-12-07 09:51:35.509287956 +0000 UTC m=+0.114819278 container init 66b53f3877b64aa8c7e2fdb57d7d9bbbacb855d0dc51791779eb83e3acfad48e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Dec  7 04:51:35 np0005549474 podman[147417]: 2025-12-07 09:51:35.414757566 +0000 UTC m=+0.020288868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:51:35 np0005549474 podman[147417]: 2025-12-07 09:51:35.517049709 +0000 UTC m=+0.122581011 container start 66b53f3877b64aa8c7e2fdb57d7d9bbbacb855d0dc51791779eb83e3acfad48e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gates, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 04:51:35 np0005549474 podman[147417]: 2025-12-07 09:51:35.52038522 +0000 UTC m=+0.125916552 container attach 66b53f3877b64aa8c7e2fdb57d7d9bbbacb855d0dc51791779eb83e3acfad48e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gates, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:51:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:51:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:35.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:51:35 np0005549474 python3.9[147574]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:51:36 np0005549474 lvm[147665]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:51:36 np0005549474 lvm[147665]: VG ceph_vg0 finished
Dec  7 04:51:36 np0005549474 inspiring_gates[147458]: {}
Dec  7 04:51:36 np0005549474 systemd[1]: libpod-66b53f3877b64aa8c7e2fdb57d7d9bbbacb855d0dc51791779eb83e3acfad48e.scope: Deactivated successfully.
Dec  7 04:51:36 np0005549474 systemd[1]: libpod-66b53f3877b64aa8c7e2fdb57d7d9bbbacb855d0dc51791779eb83e3acfad48e.scope: Consumed 1.078s CPU time.
Dec  7 04:51:36 np0005549474 podman[147669]: 2025-12-07 09:51:36.249492376 +0000 UTC m=+0.024153384 container died 66b53f3877b64aa8c7e2fdb57d7d9bbbacb855d0dc51791779eb83e3acfad48e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gates, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:51:36 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3e7311d4be4d7f16282600222f6c05ef84414bdd387ac616d82161727d79c8fd-merged.mount: Deactivated successfully.
Dec  7 04:51:36 np0005549474 podman[147669]: 2025-12-07 09:51:36.290922321 +0000 UTC m=+0.065583299 container remove 66b53f3877b64aa8c7e2fdb57d7d9bbbacb855d0dc51791779eb83e3acfad48e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 04:51:36 np0005549474 systemd[1]: libpod-conmon-66b53f3877b64aa8c7e2fdb57d7d9bbbacb855d0dc51791779eb83e3acfad48e.scope: Deactivated successfully.
Dec  7 04:51:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:51:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:51:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000000d:nfs.cephfs.2: -2
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:51:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:51:36.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:51:37 np0005549474 python3.9[147849]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:37 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:37.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:37 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84780016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:37.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:38 np0005549474 python3.9[148030]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:51:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  7 04:51:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:38 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:38 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:38 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:51:38 np0005549474 python3.9[148184]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:51:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:39 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:39.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:39 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:39.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:39 np0005549474 python3.9[148339]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:51:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:39] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Dec  7 04:51:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:39] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Dec  7 04:51:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  7 04:51:40 np0005549474 python3.9[148494]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095140 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:51:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:40 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:41 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:51:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:41.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:51:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:41 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:41.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:42 np0005549474 python3.9[148646]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:51:42
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['.nfs', 'default.rgw.log', '.rgw.root', '.mgr', 'default.rgw.control', 'images', 'volumes', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:51:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:51:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:51:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:51:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:42 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:43 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:43.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:43 np0005549474 python3.9[148801]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:51:43 np0005549474 ovs-vsctl[148802]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  7 04:51:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:43 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:43.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:44 np0005549474 python3.9[148954]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:51:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  7 04:51:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:44 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:45 np0005549474 python3.9[149110]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:51:45 np0005549474 ovs-vsctl[149112]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec  7 04:51:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:45 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:51:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:45.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:51:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:45 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:45.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:46 np0005549474 python3.9[149262]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:51:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:51:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:46 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c001f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:51:46.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:51:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:47 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:47 np0005549474 python3.9[149417]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:51:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:47.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:47 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:47.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:47 np0005549474 python3.9[149570]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:48 np0005549474 python3.9[149648]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:51:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:51:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:48 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:48 np0005549474 python3.9[149800]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:49 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:49 np0005549474 python3.9[149879]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:51:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:49.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:49 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:51:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:49.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:51:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:49] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  7 04:51:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:49] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec  7 04:51:50 np0005549474 python3.9[150034]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:51:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:50 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:51 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:51.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:51 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:51.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:51 np0005549474 python3.9[150188]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:52 np0005549474 python3.9[150266]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:51:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:52 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:53 np0005549474 python3.9[150419]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:53 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:53.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:53 np0005549474 python3.9[150498]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:53 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84680032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:53.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:51:54 np0005549474 python3.9[150650]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:51:54 np0005549474 systemd[1]: Reloading.
Dec  7 04:51:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:54 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:54 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:51:54 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:51:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:55 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:55.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:55 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:55.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:55 np0005549474 python3.9[150842]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:56 np0005549474 python3.9[150920]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:51:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:56 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84680032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:51:57.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:51:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:51:57.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:51:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:57 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:57 np0005549474 python3.9[151073]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:51:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:51:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:51:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:57.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:57 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:57 np0005549474 python3.9[151177]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:51:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:57.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:51:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:51:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:58 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:58 np0005549474 python3.9[151329]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:51:58 np0005549474 systemd[1]: Reloading.
Dec  7 04:51:58 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:51:58 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:51:59 np0005549474 systemd[1]: Starting Create netns directory...
Dec  7 04:51:59 np0005549474 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  7 04:51:59 np0005549474 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  7 04:51:59 np0005549474 systemd[1]: Finished Create netns directory.
Dec  7 04:51:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:59 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:59.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:51:59 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:51:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:51:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:51:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:59.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:51:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:59] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  7 04:51:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:51:59] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  7 04:52:00 np0005549474 python3.9[151525]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:52:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:00 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:01 np0005549474 python3.9[151678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:52:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:01 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:01.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:01 np0005549474 python3.9[151802]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101120.5812488-1364-197375158974864/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:01 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:01.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:52:02 np0005549474 python3.9[151954]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:02 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:03 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:03.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:03 np0005549474 python3.9[152108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:52:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:03 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:03.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:04 np0005549474 python3.9[152231]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101123.1609294-1439-250839218343270/.source.json _original_basename=._siw5nsl follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:52:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:52:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:04 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:05 np0005549474 python3.9[152384]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:52:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:05 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:05.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:05 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:05.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:52:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:06 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:52:07.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
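Alertmanager is failing to deliver dashboard notifications to compute-1 and compute-2; both attempts die on a context deadline or dial timeout rather than an HTTP error, which points at the receivers not listening (or being filtered) rather than rejecting the payload. The receiver URL comes straight from the log and can be tested by hand with a bounded wait:

    curl -m 5 -X POST -d '{}' http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver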
Dec  7 04:52:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:07 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:07.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095207 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
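haproxy drops nfs.cephfs.0 on a Layer4 connection refusal while ganesha restarts, and re-adds it at 09:52:29 (the "Server backend/nfs.cephfs.0 is UP" line later in this capture). If the haproxy admin socket is enabled, backend state can be read directly; the socket path below is a guess at a typical location, not taken from this log:

    echo 'show servers state' | socat stdio /var/run/haproxy.sock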
Dec  7 04:52:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:07 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:07.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:08 np0005549474 python3.9[152814]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  7 04:52:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:52:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:08 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:09 np0005549474 python3.9[152970]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  7 04:52:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:09 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:09.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:09 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:09.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:09] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  7 04:52:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:09] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
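Prometheus scrapes the ceph-mgr prometheus module every 10 seconds; both the module and the embedded cherrypy server log each hit. The same payload can be pulled manually; 9283 is the module's default port, an assumption since the capture omits it:

    curl -s http://localhost:9283/metrics | grep -m5 '^ceph_'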
Dec  7 04:52:10 np0005549474 python3.9[153123]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  7 04:52:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:52:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:10 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c000bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:11 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:11.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:11 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:11.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:12 np0005549474 python3[153304]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  7 04:52:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:52:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
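These handle_command/audit pairs are the mgr (on behalf of its volumes/nfs machinery) polling the OSD blocklist about every 15 seconds. The identical query from the CLI:

    ceph osd blocklist ls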
Dec  7 04:52:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:52:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:52:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:52:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:52:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:52:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:52:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:52:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:12 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:13 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:13.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:13 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c001710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:13.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:52:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:14 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:15 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84840027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:15.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:15 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:15.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:16 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:52:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:52:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:16 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c001710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:16 np0005549474 podman[153319]: 2025-12-07 09:52:16.84035364 +0000 UTC m=+4.617597106 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec  7 04:52:16 np0005549474 podman[153447]: 2025-12-07 09:52:16.959863102 +0000 UTC m=+0.043823654 container create ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  7 04:52:16 np0005549474 podman[153447]: 2025-12-07 09:52:16.937912865 +0000 UTC m=+0.021873427 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec  7 04:52:16 np0005549474 python3[153304]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
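The PODMAN-CONTAINER-DEBUG line is the exact podman create invocation that edpm_container_manage assembled from the JSON config above. Once the container exists, the same settings can be read back from it (names taken from the log):

    podman inspect ovn_controller --format '{{index .Config.Labels "config_id"}}'
    podman inspect ovn_controller --format '{{.HostConfig.Privileged}} {{.HostConfig.NetworkMode}}'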
Dec  7 04:52:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:52:17.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:52:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:52:17.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:52:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:17 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:17 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84840027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:17.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:18 np0005549474 python3.9[153662]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:52:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec  7 04:52:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:18 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:19 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:52:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:19 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
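The reaper thread entered a 90-second grace period at 09:52:16 (the "NFS Server Now IN GRACE" line above) and, with zero clients to reclaim (clid count(0)), lifts it early at 09:52:23 (the "NOT IN GRACE" line later in this capture). One way to track these transitions across restarts, assuming the messages land in the journal as they do here:

    journalctl --no-pager | grep -E 'NFS Server Now (IN|NOT IN) GRACE'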
Dec  7 04:52:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:19 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:19 np0005549474 python3.9[153818]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:52:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:19.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:19 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:19 np0005549474 python3.9[153894]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:52:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:19.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:19] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  7 04:52:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:19] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  7 04:52:20 np0005549474 python3.9[154045]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765101139.807538-1703-94079038401490/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:52:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:52:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:20 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84840027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:21 np0005549474 python3.9[154122]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  7 04:52:21 np0005549474 systemd[1]: Reloading.
Dec  7 04:52:21 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:52:21 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
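Both generator messages reappear on every daemon-reload in this capture: the legacy network init script is wrapped in a generated compatibility unit, and rc.local is skipped because the file is not executable. To confirm either by hand:

    systemctl cat network.service   # shows the generated unit under /run/systemd/generator.late
    ls -l /etc/rc.d/rc.local        # mode lacks +x, hence the rc-local-generator skip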
Dec  7 04:52:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:21 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:21.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:21 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:21.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:21 np0005549474 python3.9[154235]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:52:21 np0005549474 systemd[1]: Reloading.
Dec  7 04:52:21 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:52:21 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:52:22 np0005549474 systemd[1]: Starting ovn_controller container...
Dec  7 04:52:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:52:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:22 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:23 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:23 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:52:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:23 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:52:23 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89b86bddc9a1ca596495af920d78a40552cf974aab9b450bcd42a0477d1ff7b/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:23.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:23 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:23.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:23 np0005549474 systemd[1]: Started /usr/bin/podman healthcheck run ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d.
Dec  7 04:52:23 np0005549474 podman[154276]: 2025-12-07 09:52:23.834053698 +0000 UTC m=+1.631776135 container init ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  7 04:52:23 np0005549474 ovn_controller[154296]: + sudo -E kolla_set_configs
Dec  7 04:52:23 np0005549474 podman[154276]: 2025-12-07 09:52:23.863664673 +0000 UTC m=+1.661387110 container start ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  7 04:52:23 np0005549474 edpm-start-podman-container[154276]: ovn_controller
Dec  7 04:52:23 np0005549474 systemd[1]: Created slice User Slice of UID 0.
Dec  7 04:52:23 np0005549474 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  7 04:52:23 np0005549474 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  7 04:52:23 np0005549474 systemd[1]: Starting User Manager for UID 0...
Dec  7 04:52:23 np0005549474 podman[154303]: 2025-12-07 09:52:23.935478847 +0000 UTC m=+0.062377527 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  7 04:52:23 np0005549474 edpm-start-podman-container[154275]: Creating additional drop-in dependency for "ovn_controller" (ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d)
Dec  7 04:52:23 np0005549474 systemd[1]: ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d-6c4d45ebce61f745.service: Main process exited, code=exited, status=1/FAILURE
Dec  7 04:52:23 np0005549474 systemd[1]: ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d-6c4d45ebce61f745.service: Failed with result 'exit-code'.
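The failed transient unit appears to be podman's healthcheck runner for the new container (compare the health_status=starting event above, logged with failing streak 1 because the check fired before ovn_controller finished starting). Re-running the check and reading its state by hand:

    podman healthcheck run ovn_controller; echo "exit=$?"
    # Field name varies by podman version; newer releases use .State.Health.Status:
    podman inspect ovn_controller --format '{{.State.Healthcheck.Status}}'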
Dec  7 04:52:23 np0005549474 systemd[1]: Reloading.
Dec  7 04:52:24 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:52:24 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:52:24 np0005549474 systemd[154333]: Queued start job for default target Main User Target.
Dec  7 04:52:24 np0005549474 systemd[154333]: Created slice User Application Slice.
Dec  7 04:52:24 np0005549474 systemd[154333]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  7 04:52:24 np0005549474 systemd[154333]: Started Daily Cleanup of User's Temporary Directories.
Dec  7 04:52:24 np0005549474 systemd[154333]: Reached target Paths.
Dec  7 04:52:24 np0005549474 systemd[154333]: Reached target Timers.
Dec  7 04:52:24 np0005549474 systemd[154333]: Starting D-Bus User Message Bus Socket...
Dec  7 04:52:24 np0005549474 systemd[154333]: Starting Create User's Volatile Files and Directories...
Dec  7 04:52:24 np0005549474 systemd[154333]: Listening on D-Bus User Message Bus Socket.
Dec  7 04:52:24 np0005549474 systemd[154333]: Reached target Sockets.
Dec  7 04:52:24 np0005549474 systemd[154333]: Finished Create User's Volatile Files and Directories.
Dec  7 04:52:24 np0005549474 systemd[154333]: Reached target Basic System.
Dec  7 04:52:24 np0005549474 systemd[154333]: Reached target Main User Target.
Dec  7 04:52:24 np0005549474 systemd[154333]: Startup finished in 170ms.
Dec  7 04:52:24 np0005549474 systemd[1]: Started User Manager for UID 0.
Dec  7 04:52:24 np0005549474 systemd[1]: Started ovn_controller container.
Dec  7 04:52:24 np0005549474 systemd[1]: Started Session c1 of User root.
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: INFO:__main__:Validating config file
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: INFO:__main__:Writing out command to execute
Dec  7 04:52:24 np0005549474 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: ++ cat /run_command
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: + ARGS=
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: + sudo kolla_copy_cacerts
Dec  7 04:52:24 np0005549474 systemd[1]: Started Session c2 of User root.
Dec  7 04:52:24 np0005549474 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: + [[ ! -n '' ]]
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: + . kolla_extend_start
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: + umask 0022
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
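The "+" lines are the traced kolla start script: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json, the command value is written to /run_command, and the script execs it, which is why the ovn-controller invocation above matches the CMD read from that file. To inspect the directive file on the host (contents beyond what the trace shows are not reproduced here):

    python3 -m json.tool /var/lib/kolla/config_files/ovn_controller.json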
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  7 04:52:24 np0005549474 NetworkManager[49051]: <info>  [1765101144.3808] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec  7 04:52:24 np0005549474 NetworkManager[49051]: <info>  [1765101144.3813] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 04:52:24 np0005549474 NetworkManager[49051]: <info>  [1765101144.3821] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  7 04:52:24 np0005549474 NetworkManager[49051]: <info>  [1765101144.3825] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec  7 04:52:24 np0005549474 NetworkManager[49051]: <info>  [1765101144.3828] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  7 04:52:24 np0005549474 kernel: br-int: entered promiscuous mode
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  7 04:52:24 np0005549474 systemd-udevd[154431]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  7 04:52:24 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:24Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
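Startup sequence in brief: ovn-controller connects to the local ovsdb socket, reaches the southbound DB over SSL with the mounted key/cert/CA, probes OVS feature support (ct_zero_snat, ct_flush, dp_hash_l4_sym_support), then opens its OpenFlow, pinctrl, and statctrl channels to br-int. The SB target and encapsulation come from Open_vSwitch external_ids; the keys below are the standard ovn-controller ones, though their values are not shown in this capture:

    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-remote
    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-encap-type external_ids:ovn-encap-ip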
Dec  7 04:52:24 np0005549474 NetworkManager[49051]: <info>  [1765101144.4235] manager: (ovn-e231b2-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  7 04:52:24 np0005549474 NetworkManager[49051]: <info>  [1765101144.4242] manager: (ovn-0e65d7-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Dec  7 04:52:24 np0005549474 NetworkManager[49051]: <info>  [1765101144.4247] manager: (ovn-cbaa5e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec  7 04:52:24 np0005549474 kernel: genev_sys_6081: entered promiscuous mode
Dec  7 04:52:24 np0005549474 NetworkManager[49051]: <info>  [1765101144.4374] device (genev_sys_6081): carrier: link connected
Dec  7 04:52:24 np0005549474 NetworkManager[49051]: <info>  [1765101144.4376] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Dec  7 04:52:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:52:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:24 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:24 np0005549474 python3.9[154563]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:52:24 np0005549474 ovs-vsctl[154565]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  7 04:52:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:25.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:25.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:25 np0005549474 python3.9[154718]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:52:25 np0005549474 ovs-vsctl[154720]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
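The db_ctl_base ERR is benign: the playbook reads external_ids:ovn-cms-options through sed before removing it, and the key simply is not set on this node; the remove that follows (lines below) tolerates a missing key. The read itself can be made quiet with --if-exists:

    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options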
Dec  7 04:52:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:52:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:26 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:52:27.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:52:27 np0005549474 python3.9[154874]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:52:27 np0005549474 ovs-vsctl[154876]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  7 04:52:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:27 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:52:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:52:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:27.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:27 np0005549474 systemd[1]: session-50.scope: Deactivated successfully.
Dec  7 04:52:27 np0005549474 systemd[1]: session-50.scope: Consumed 53.960s CPU time.
Dec  7 04:52:27 np0005549474 systemd-logind[796]: Session 50 logged out. Waiting for processes to exit.
Dec  7 04:52:27 np0005549474 systemd-logind[796]: Removed session 50.
Dec  7 04:52:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:27 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84540016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:27.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:52:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:28 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84540016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:29 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:29.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095229 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:52:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:29 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:29.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:29] "GET /metrics HTTP/1.1" 200 48185 "" "Prometheus/2.51.0"
Dec  7 04:52:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:29] "GET /metrics HTTP/1.1" 200 48185 "" "Prometheus/2.51.0"
Dec  7 04:52:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 852 B/s wr, 3 op/s
Dec  7 04:52:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:30 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:31.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:31.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:52:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:32 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:32 np0005549474 systemd-logind[796]: New session 52 of user zuul.
Dec  7 04:52:32 np0005549474 systemd[1]: Started Session 52 of User zuul.
Dec  7 04:52:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:33 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:33.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:33 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84540016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:33.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:33 np0005549474 python3.9[155060]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:52:34 np0005549474 systemd[1]: Stopping User Manager for UID 0...
Dec  7 04:52:34 np0005549474 systemd[154333]: Activating special unit Exit the Session...
Dec  7 04:52:34 np0005549474 systemd[154333]: Stopped target Main User Target.
Dec  7 04:52:34 np0005549474 systemd[154333]: Stopped target Basic System.
Dec  7 04:52:34 np0005549474 systemd[154333]: Stopped target Paths.
Dec  7 04:52:34 np0005549474 systemd[154333]: Stopped target Sockets.
Dec  7 04:52:34 np0005549474 systemd[154333]: Stopped target Timers.
Dec  7 04:52:34 np0005549474 systemd[154333]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  7 04:52:34 np0005549474 systemd[154333]: Closed D-Bus User Message Bus Socket.
Dec  7 04:52:34 np0005549474 systemd[154333]: Stopped Create User's Volatile Files and Directories.
Dec  7 04:52:34 np0005549474 systemd[154333]: Removed slice User Application Slice.
Dec  7 04:52:34 np0005549474 systemd[154333]: Reached target Shutdown.
Dec  7 04:52:34 np0005549474 systemd[154333]: Finished Exit the Session.
Dec  7 04:52:34 np0005549474 systemd[154333]: Reached target Exit the Session.
Dec  7 04:52:34 np0005549474 systemd[1]: user@0.service: Deactivated successfully.
Dec  7 04:52:34 np0005549474 systemd[1]: Stopped User Manager for UID 0.
Dec  7 04:52:34 np0005549474 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  7 04:52:34 np0005549474 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  7 04:52:34 np0005549474 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  7 04:52:34 np0005549474 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  7 04:52:34 np0005549474 systemd[1]: Removed slice User Slice of UID 0.
Dec  7 04:52:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:52:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:34 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:35 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:35 np0005549474 python3.9[155218]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:35.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:35 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:52:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:35.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:52:35 np0005549474 python3.9[155371]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:36 np0005549474 python3.9[155523]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:52:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:52:37.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:52:37 np0005549474 python3.9[155727]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:37 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:52:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:37.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:52:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:52:37 np0005549474 python3.9[155909]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:37 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:37.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:37 np0005549474 podman[156049]: 2025-12-07 09:52:37.977841007 +0000 UTC m=+0.074143908 container create 477547725c2ea0a5072402eb6088f6ba8d2aac30dc10492ed19c6d3638d882f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_moser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 04:52:38 np0005549474 systemd[1]: Started libpod-conmon-477547725c2ea0a5072402eb6088f6ba8d2aac30dc10492ed19c6d3638d882f5.scope.
Dec  7 04:52:38 np0005549474 podman[156049]: 2025-12-07 09:52:37.926412778 +0000 UTC m=+0.022715689 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:52:38 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:52:38 np0005549474 podman[156049]: 2025-12-07 09:52:38.065504012 +0000 UTC m=+0.161806923 container init 477547725c2ea0a5072402eb6088f6ba8d2aac30dc10492ed19c6d3638d882f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 04:52:38 np0005549474 podman[156049]: 2025-12-07 09:52:38.072287706 +0000 UTC m=+0.168590607 container start 477547725c2ea0a5072402eb6088f6ba8d2aac30dc10492ed19c6d3638d882f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:52:38 np0005549474 podman[156049]: 2025-12-07 09:52:38.075935516 +0000 UTC m=+0.172238427 container attach 477547725c2ea0a5072402eb6088f6ba8d2aac30dc10492ed19c6d3638d882f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_moser, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:52:38 np0005549474 nifty_moser[156066]: 167 167
Dec  7 04:52:38 np0005549474 systemd[1]: libpod-477547725c2ea0a5072402eb6088f6ba8d2aac30dc10492ed19c6d3638d882f5.scope: Deactivated successfully.
Dec  7 04:52:38 np0005549474 conmon[156066]: conmon 477547725c2ea0a50724 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-477547725c2ea0a5072402eb6088f6ba8d2aac30dc10492ed19c6d3638d882f5.scope/container/memory.events
Dec  7 04:52:38 np0005549474 podman[156049]: 2025-12-07 09:52:38.078231558 +0000 UTC m=+0.174534469 container died 477547725c2ea0a5072402eb6088f6ba8d2aac30dc10492ed19c6d3638d882f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_moser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:52:38 np0005549474 systemd[1]: var-lib-containers-storage-overlay-55be96830da9686b76c978db8326df48abe65378c2592c7d7c4b47a98448dbc4-merged.mount: Deactivated successfully.
Dec  7 04:52:38 np0005549474 podman[156049]: 2025-12-07 09:52:38.118543355 +0000 UTC m=+0.214846246 container remove 477547725c2ea0a5072402eb6088f6ba8d2aac30dc10492ed19c6d3638d882f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_moser, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 04:52:38 np0005549474 systemd[1]: libpod-conmon-477547725c2ea0a5072402eb6088f6ba8d2aac30dc10492ed19c6d3638d882f5.scope: Deactivated successfully.
Dec  7 04:52:38 np0005549474 podman[156088]: 2025-12-07 09:52:38.318139655 +0000 UTC m=+0.075326291 container create 89325c61018807dbcfdc4f84ae70eb0b58cb999edab678ab0981eb26854c2fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 04:52:38 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:52:38 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:52:38 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:52:38 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:52:38 np0005549474 systemd[1]: Started libpod-conmon-89325c61018807dbcfdc4f84ae70eb0b58cb999edab678ab0981eb26854c2fc6.scope.
Dec  7 04:52:38 np0005549474 podman[156088]: 2025-12-07 09:52:38.2705483 +0000 UTC m=+0.027734926 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:52:38 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:52:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22740820de13e3872e978a18da4e67e36ff5fef7339c959ab656d9732fa129e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22740820de13e3872e978a18da4e67e36ff5fef7339c959ab656d9732fa129e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22740820de13e3872e978a18da4e67e36ff5fef7339c959ab656d9732fa129e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22740820de13e3872e978a18da4e67e36ff5fef7339c959ab656d9732fa129e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:38 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22740820de13e3872e978a18da4e67e36ff5fef7339c959ab656d9732fa129e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:38 np0005549474 podman[156088]: 2025-12-07 09:52:38.420719576 +0000 UTC m=+0.177906192 container init 89325c61018807dbcfdc4f84ae70eb0b58cb999edab678ab0981eb26854c2fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_ramanujan, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 04:52:38 np0005549474 podman[156088]: 2025-12-07 09:52:38.427449299 +0000 UTC m=+0.184635935 container start 89325c61018807dbcfdc4f84ae70eb0b58cb999edab678ab0981eb26854c2fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  7 04:52:38 np0005549474 podman[156088]: 2025-12-07 09:52:38.431696195 +0000 UTC m=+0.188882801 container attach 89325c61018807dbcfdc4f84ae70eb0b58cb999edab678ab0981eb26854c2fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:52:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:52:38 np0005549474 kind_ramanujan[156104]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:52:38 np0005549474 kind_ramanujan[156104]: --> All data devices are unavailable
Dec  7 04:52:38 np0005549474 systemd[1]: libpod-89325c61018807dbcfdc4f84ae70eb0b58cb999edab678ab0981eb26854c2fc6.scope: Deactivated successfully.
Dec  7 04:52:38 np0005549474 podman[156088]: 2025-12-07 09:52:38.765437414 +0000 UTC m=+0.522624050 container died 89325c61018807dbcfdc4f84ae70eb0b58cb999edab678ab0981eb26854c2fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_ramanujan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:52:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:38 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:38 np0005549474 systemd[1]: var-lib-containers-storage-overlay-22740820de13e3872e978a18da4e67e36ff5fef7339c959ab656d9732fa129e8-merged.mount: Deactivated successfully.
Dec  7 04:52:38 np0005549474 podman[156088]: 2025-12-07 09:52:38.837161845 +0000 UTC m=+0.594348431 container remove 89325c61018807dbcfdc4f84ae70eb0b58cb999edab678ab0981eb26854c2fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:52:38 np0005549474 systemd[1]: libpod-conmon-89325c61018807dbcfdc4f84ae70eb0b58cb999edab678ab0981eb26854c2fc6.scope: Deactivated successfully.
Dec  7 04:52:39 np0005549474 python3.9[156255]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:52:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:39 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:39 np0005549474 podman[156375]: 2025-12-07 09:52:39.314003058 +0000 UTC m=+0.020291354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:52:39 np0005549474 podman[156375]: 2025-12-07 09:52:39.468602154 +0000 UTC m=+0.174890420 container create f521100257ac3c848cbe076caa87226727a332a44f13862e136ffa7d5daa4e4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cohen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:52:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:39.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:39 np0005549474 systemd[1]: Started libpod-conmon-f521100257ac3c848cbe076caa87226727a332a44f13862e136ffa7d5daa4e4e.scope.
Dec  7 04:52:39 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:52:39 np0005549474 podman[156375]: 2025-12-07 09:52:39.626932891 +0000 UTC m=+0.333221187 container init f521100257ac3c848cbe076caa87226727a332a44f13862e136ffa7d5daa4e4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cohen, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:52:39 np0005549474 podman[156375]: 2025-12-07 09:52:39.633174911 +0000 UTC m=+0.339463177 container start f521100257ac3c848cbe076caa87226727a332a44f13862e136ffa7d5daa4e4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cohen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:52:39 np0005549474 podman[156375]: 2025-12-07 09:52:39.636363588 +0000 UTC m=+0.342651854 container attach f521100257ac3c848cbe076caa87226727a332a44f13862e136ffa7d5daa4e4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cohen, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:52:39 np0005549474 jovial_cohen[156444]: 167 167
Dec  7 04:52:39 np0005549474 systemd[1]: libpod-f521100257ac3c848cbe076caa87226727a332a44f13862e136ffa7d5daa4e4e.scope: Deactivated successfully.
Dec  7 04:52:39 np0005549474 podman[156375]: 2025-12-07 09:52:39.638436054 +0000 UTC m=+0.344724320 container died f521100257ac3c848cbe076caa87226727a332a44f13862e136ffa7d5daa4e4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:52:39 np0005549474 systemd[1]: var-lib-containers-storage-overlay-8d8b688f84d106d15328ad8e65831d9990783c514445357fdf7c03cc64334cb1-merged.mount: Deactivated successfully.
Dec  7 04:52:39 np0005549474 podman[156375]: 2025-12-07 09:52:39.683700655 +0000 UTC m=+0.389988921 container remove f521100257ac3c848cbe076caa87226727a332a44f13862e136ffa7d5daa4e4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  7 04:52:39 np0005549474 systemd[1]: libpod-conmon-f521100257ac3c848cbe076caa87226727a332a44f13862e136ffa7d5daa4e4e.scope: Deactivated successfully.
Dec  7 04:52:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:39 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:39.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:39 np0005549474 podman[156516]: 2025-12-07 09:52:39.860309341 +0000 UTC m=+0.058004180 container create da0020001b311e29cb99609fda175d53e3cf6cf6835f6be6514b5cd04023e772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 04:52:39 np0005549474 systemd[1]: Started libpod-conmon-da0020001b311e29cb99609fda175d53e3cf6cf6835f6be6514b5cd04023e772.scope.
Dec  7 04:52:39 np0005549474 podman[156516]: 2025-12-07 09:52:39.831329852 +0000 UTC m=+0.029024751 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:52:39 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:52:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228a914b0f5407514c61c94450b7a5e9e2ef4c0208bb945c5ef57f2581176981/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228a914b0f5407514c61c94450b7a5e9e2ef4c0208bb945c5ef57f2581176981/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228a914b0f5407514c61c94450b7a5e9e2ef4c0208bb945c5ef57f2581176981/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228a914b0f5407514c61c94450b7a5e9e2ef4c0208bb945c5ef57f2581176981/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:39 np0005549474 podman[156516]: 2025-12-07 09:52:39.94630429 +0000 UTC m=+0.143999149 container init da0020001b311e29cb99609fda175d53e3cf6cf6835f6be6514b5cd04023e772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  7 04:52:39 np0005549474 podman[156516]: 2025-12-07 09:52:39.95258316 +0000 UTC m=+0.150277999 container start da0020001b311e29cb99609fda175d53e3cf6cf6835f6be6514b5cd04023e772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:52:39 np0005549474 podman[156516]: 2025-12-07 09:52:39.95585099 +0000 UTC m=+0.153545829 container attach da0020001b311e29cb99609fda175d53e3cf6cf6835f6be6514b5cd04023e772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Dec  7 04:52:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:39] "GET /metrics HTTP/1.1" 200 48185 "" "Prometheus/2.51.0"
Dec  7 04:52:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:39] "GET /metrics HTTP/1.1" 200 48185 "" "Prometheus/2.51.0"
Dec  7 04:52:40 np0005549474 python3.9[156560]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]: {
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:    "0": [
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:        {
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            "devices": [
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "/dev/loop3"
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            ],
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            "lv_name": "ceph_lv0",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            "lv_size": "21470642176",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            "name": "ceph_lv0",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            "tags": {
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.cluster_name": "ceph",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.crush_device_class": "",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.encrypted": "0",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.osd_id": "0",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.type": "block",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.vdo": "0",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:                "ceph.with_tpm": "0"
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            },
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            "type": "block",
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:            "vg_name": "ceph_vg0"
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:        }
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]:    ]
Dec  7 04:52:40 np0005549474 goofy_shaw[156563]: }
Dec  7 04:52:40 np0005549474 systemd[1]: libpod-da0020001b311e29cb99609fda175d53e3cf6cf6835f6be6514b5cd04023e772.scope: Deactivated successfully.
Dec  7 04:52:40 np0005549474 podman[156516]: 2025-12-07 09:52:40.291722378 +0000 UTC m=+0.489417257 container died da0020001b311e29cb99609fda175d53e3cf6cf6835f6be6514b5cd04023e772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 04:52:40 np0005549474 systemd[1]: var-lib-containers-storage-overlay-228a914b0f5407514c61c94450b7a5e9e2ef4c0208bb945c5ef57f2581176981-merged.mount: Deactivated successfully.
Dec  7 04:52:40 np0005549474 podman[156516]: 2025-12-07 09:52:40.36683755 +0000 UTC m=+0.564532419 container remove da0020001b311e29cb99609fda175d53e3cf6cf6835f6be6514b5cd04023e772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 04:52:40 np0005549474 systemd[1]: libpod-conmon-da0020001b311e29cb99609fda175d53e3cf6cf6835f6be6514b5cd04023e772.scope: Deactivated successfully.
Dec  7 04:52:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:52:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:40 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84680041a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:41 np0005549474 podman[156705]: 2025-12-07 09:52:41.077840384 +0000 UTC m=+0.044194314 container create 63d3ad561c1c70d37c256353b960356a093dddf47b70cecdf9a2a32bde6929c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tesla, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  7 04:52:41 np0005549474 systemd[1]: Started libpod-conmon-63d3ad561c1c70d37c256353b960356a093dddf47b70cecdf9a2a32bde6929c4.scope.
Dec  7 04:52:41 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:52:41 np0005549474 podman[156705]: 2025-12-07 09:52:41.148497716 +0000 UTC m=+0.114851616 container init 63d3ad561c1c70d37c256353b960356a093dddf47b70cecdf9a2a32bde6929c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:52:41 np0005549474 podman[156705]: 2025-12-07 09:52:41.058763975 +0000 UTC m=+0.025117875 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:52:41 np0005549474 podman[156705]: 2025-12-07 09:52:41.161031947 +0000 UTC m=+0.127385837 container start 63d3ad561c1c70d37c256353b960356a093dddf47b70cecdf9a2a32bde6929c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 04:52:41 np0005549474 podman[156705]: 2025-12-07 09:52:41.164946934 +0000 UTC m=+0.131300814 container attach 63d3ad561c1c70d37c256353b960356a093dddf47b70cecdf9a2a32bde6929c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tesla, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 04:52:41 np0005549474 pedantic_tesla[156765]: 167 167
Dec  7 04:52:41 np0005549474 systemd[1]: libpod-63d3ad561c1c70d37c256353b960356a093dddf47b70cecdf9a2a32bde6929c4.scope: Deactivated successfully.
Dec  7 04:52:41 np0005549474 podman[156705]: 2025-12-07 09:52:41.167626577 +0000 UTC m=+0.133980457 container died 63d3ad561c1c70d37c256353b960356a093dddf47b70cecdf9a2a32bde6929c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tesla, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:52:41 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3816e0fdfa4df9cb930a1f92f436b2133dc22dc170c779c1ad58b4a2e617bb1f-merged.mount: Deactivated successfully.
Dec  7 04:52:41 np0005549474 podman[156705]: 2025-12-07 09:52:41.207503371 +0000 UTC m=+0.173857261 container remove 63d3ad561c1c70d37c256353b960356a093dddf47b70cecdf9a2a32bde6929c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 04:52:41 np0005549474 systemd[1]: libpod-conmon-63d3ad561c1c70d37c256353b960356a093dddf47b70cecdf9a2a32bde6929c4.scope: Deactivated successfully.
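[annotation] The create → init → start → attach → died → remove sequence above, all inside ~130 ms, is the signature of a one-shot container run with --rm: here cephadm executing a short command in the ceph image (the "167 167" written to the attached console is likely the ceph uid/gid pair it queried). A hedged sketch that reproduces the same lifecycle, assuming podman is installed and using an illustrative image:

```python
# Sketch: run a short-lived "--rm" container; podman emits the same
# create/init/start/attach/died/remove event chain seen in the log.
import subprocess

subprocess.run(
    ["podman", "run", "--rm", "quay.io/centos/centos:stream9", "id", "-u"],
    check=True,
)
# Observe the events from another shell with:
#   podman events --filter event=create --filter event=remove
```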
Dec  7 04:52:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:41 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:41 np0005549474 podman[156805]: 2025-12-07 09:52:41.437810048 +0000 UTC m=+0.065698179 container create 2ad4b92f231d324f38b1435db7cde0f5e9cd15fc5a95d584c51ddbe1a79c1cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  7 04:52:41 np0005549474 systemd[1]: Started libpod-conmon-2ad4b92f231d324f38b1435db7cde0f5e9cd15fc5a95d584c51ddbe1a79c1cf3.scope.
Dec  7 04:52:41 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:52:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:41.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb86d007742934dc06bb700a6a969d5987a5f5f0ec83fa884c2c07081c24a132/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb86d007742934dc06bb700a6a969d5987a5f5f0ec83fa884c2c07081c24a132/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb86d007742934dc06bb700a6a969d5987a5f5f0ec83fa884c2c07081c24a132/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:52:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb86d007742934dc06bb700a6a969d5987a5f5f0ec83fa884c2c07081c24a132/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
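[annotation] The four kernel notices above mean these XFS-backed overlay mounts use 32-bit inode timestamps, which run out at 0x7fffffff seconds after the epoch; XFS filesystems created with the "bigtime" feature extend the range far past that. Quick check of the cutoff the kernel is quoting:

```python
# The kernel's 0x7fffffff cutoff is the classic 32-bit time_t limit:
from datetime import datetime, timezone
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```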
Dec  7 04:52:41 np0005549474 podman[156805]: 2025-12-07 09:52:41.404102191 +0000 UTC m=+0.031990342 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:52:41 np0005549474 podman[156805]: 2025-12-07 09:52:41.50591991 +0000 UTC m=+0.133808031 container init 2ad4b92f231d324f38b1435db7cde0f5e9cd15fc5a95d584c51ddbe1a79c1cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:52:41 np0005549474 podman[156805]: 2025-12-07 09:52:41.514063642 +0000 UTC m=+0.141951773 container start 2ad4b92f231d324f38b1435db7cde0f5e9cd15fc5a95d584c51ddbe1a79c1cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamport, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  7 04:52:41 np0005549474 podman[156805]: 2025-12-07 09:52:41.518605625 +0000 UTC m=+0.146493726 container attach 2ad4b92f231d324f38b1435db7cde0f5e9cd15fc5a95d584c51ddbe1a79c1cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 04:52:41 np0005549474 python3.9[156888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:52:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:41 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:41.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
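[annotation] The beast frontend lines repeating through this section show anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, alternating roughly every two seconds and always returning 200: that cadence, the HTTP/1.0 version, and the empty user agent are typical of load-balancer health probes against radosgw. A hedged equivalent probe (host and port are assumptions; beast commonly listens on 8080):

```python
# Sketch of the kind of health probe producing these beast access lines.
import http.client

conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
conn.request("HEAD", "/")          # anonymous, no auth headers
resp = conn.getresponse()
print(resp.status)                 # expect 200 from a healthy radosgw
conn.close()
```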
Dec  7 04:52:42 np0005549474 lvm[157024]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:52:42 np0005549474 lvm[157024]: VG ceph_vg0 finished
Dec  7 04:52:42 np0005549474 interesting_lamport[156857]: {}
Dec  7 04:52:42 np0005549474 systemd[1]: libpod-2ad4b92f231d324f38b1435db7cde0f5e9cd15fc5a95d584c51ddbe1a79c1cf3.scope: Deactivated successfully.
Dec  7 04:52:42 np0005549474 systemd[1]: libpod-2ad4b92f231d324f38b1435db7cde0f5e9cd15fc5a95d584c51ddbe1a79c1cf3.scope: Consumed 1.034s CPU time.
Dec  7 04:52:42 np0005549474 podman[156805]: 2025-12-07 09:52:42.193382713 +0000 UTC m=+0.821270814 container died 2ad4b92f231d324f38b1435db7cde0f5e9cd15fc5a95d584c51ddbe1a79c1cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:52:42 np0005549474 systemd[1]: var-lib-containers-storage-overlay-eb86d007742934dc06bb700a6a969d5987a5f5f0ec83fa884c2c07081c24a132-merged.mount: Deactivated successfully.
Dec  7 04:52:42 np0005549474 podman[156805]: 2025-12-07 09:52:42.238613043 +0000 UTC m=+0.866501144 container remove 2ad4b92f231d324f38b1435db7cde0f5e9cd15fc5a95d584c51ddbe1a79c1cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:52:42 np0005549474 systemd[1]: libpod-conmon-2ad4b92f231d324f38b1435db7cde0f5e9cd15fc5a95d584c51ddbe1a79c1cf3.scope: Deactivated successfully.
Dec  7 04:52:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:52:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:52:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:52:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:52:42
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', '.mgr', 'images', 'vms', 'cephfs.cephfs.data', 'volumes', '.nfs', 'default.rgw.meta']
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
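[annotation] The balancer block above is one idle wake-up: the mgr builds plan auto_2025-12-07_09:52:42 in upmap mode with a max-misplaced ratio of 0.05, evaluates the listed pools, and prepares 0 of a possible 10 upmap changes, i.e. the PG distribution is already optimal. The same state can be queried from the CLI; a sketch, assuming the ceph CLI and an admin keyring are available on this host:

```python
# Sketch: query the balancer for the state the log lines describe.
import json, subprocess

out = subprocess.run(
    ["ceph", "balancer", "status", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
status = json.loads(out)
print(status.get("mode"), status.get("active"))   # e.g. "upmap" True
```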
Dec  7 04:52:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:52:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:52:42 np0005549474 python3.9[157094]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101161.0598567-218-77135919555368/.source follow=False _original_basename=haproxy.j2 checksum=cc5e97ea900947bff0c19d73b88d99840e041f49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
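[annotation] Each pg_autoscaler pair of lines above applies the same arithmetic: the raw PG target is the pool's share of capacity times its bias times the cluster PG budget (mon_target_pg_per_osd, default 100, times the number of OSDs; 100 × 3 = 300 reproduces every figure in this log), then quantized to a power of two, with the current pg_num kept unless the result is off by a large factor (3× by default). A hedged sketch of that calculation; the real module additionally enforces per-pool minimums and other guards, which is why the empty pools here stay at 32:

```python
# Sketch of the pg_autoscaler arithmetic visible in the log.
# Assumption: PG budget = mon_target_pg_per_osd (100) * 3 OSDs = 300.
import math

def pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> float:
    return capacity_ratio * bias * pg_budget

def quantize(target: float, current: int, threshold: float = 3.0) -> int:
    # Round up to a power of two; keep the current pg_num unless the
    # new value deviates by more than `threshold` in either direction.
    p2 = 1 << max(0, math.ceil(math.log2(max(target, 1))))
    if current and max(p2, current) / min(p2, current) < threshold:
        return current
    return p2

raw = pg_target(7.185749983720779e-06, 1.0)   # the '.mgr' pool line
print(raw)                     # 0.0021557249951162337, matching the log
print(quantize(raw, current=1))  # stays at 1, as logged
```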
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:52:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:52:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:42 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:43 np0005549474 python3.9[157271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:52:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:43 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0027b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:43 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:52:43 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:52:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:52:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:43.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:52:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:43 np0005549474 python3.9[157393]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101162.6565516-263-73692762383418/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
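[annotation] Each ansible stat/copy pair in this section is a content check by SHA-1: the stat task computes the destination file's checksum, and the copy task only rewrites it when that differs from the rendered template's checksum (here 2dfb5489f491f61b95691c3bf95fa1fe48ff3700 for the haproxy kill script). The equivalent check in Python:

```python
# Sketch: the SHA-1 content comparison behind ansible's stat/copy idempotence.
import hashlib

def sha1_of(path: str) -> str:
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the checksum the task logged, e.g.:
# sha1_of("/var/lib/neutron/kill_scripts/haproxy-kill")
#   == "2dfb5489f491f61b95691c3bf95fa1fe48ff3700"
```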
Dec  7 04:52:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:43 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:43.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:52:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:44 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:44 np0005549474 python3.9[157545]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:52:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:45 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:45.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:45 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c003ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:45 np0005549474 python3.9[157631]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:52:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:45.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:52:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:46 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460001f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:52:47.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
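[annotation] The alertmanager dispatcher above is trying to POST firing alerts to the Ceph dashboard's Prometheus receiver on compute-1 and compute-2 and exhausting its retry budget ("context deadline exceeded"); only the unreachable peers fail, so the alert itself is still delivered locally. Purely as a hypothetical stand-in to show the shape of the call (this is NOT the Ceph dashboard's code; path and port only mirror the URL in the log, and the payload keys follow the generic alertmanager webhook format):

```python
# Hypothetical sketch of a webhook receiver compatible with these POSTs.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/prometheus_receiver":
            self.send_error(404)
            return
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        payload = json.loads(body or b"{}")
        print("received", len(payload.get("alerts", [])), "alerts")
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
```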
Dec  7 04:52:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:47 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:47.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:47 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:52:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:47.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:52:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:52:48 np0005549474 python3.9[157788]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  7 04:52:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:48 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:49 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460001f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 04:52:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:49.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  7 04:52:49 np0005549474 python3.9[157943]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:52:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:49 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:49.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:49] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  7 04:52:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:49] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  7 04:52:50 np0005549474 python3.9[158064]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101169.042379-374-79166245541617/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:52:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:50 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:50 np0005549474 python3.9[158214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:52:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:51 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c003ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:51 np0005549474 python3.9[158337]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101170.4473817-374-82384209083625/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:51.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:51 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460001f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:51.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:52:52 np0005549474 python3.9[158487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:52:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:52 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:53 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:53 np0005549474 python3.9[158610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101172.3761413-506-97630381257760/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:53.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:53 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c003ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:53.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:53 np0005549474 python3.9[158760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:52:54 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:54Z|00025|memory|INFO|16384 kB peak resident set size after 29.9 seconds
Dec  7 04:52:54 np0005549474 ovn_controller[154296]: 2025-12-07T09:52:54Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Dec  7 04:52:54 np0005549474 podman[158826]: 2025-12-07 09:52:54.323053717 +0000 UTC m=+0.120107769 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
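[annotation] This health_status event is podman's periodic healthcheck for ovn_controller: per the embedded config_data, the check runs the container's '/openstack/healthcheck' test, and the event records health_status=healthy with a failing streak of 0. The same check can be triggered on demand; a sketch, assuming podman and the ovn_controller container from this log:

```python
# Sketch: run the container's healthcheck by hand; exit code 0 == healthy.
import subprocess

r = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
print("healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")
```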
Dec  7 04:52:54 np0005549474 python3.9[158908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101173.5063689-506-187261110147309/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:52:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:54 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460001f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:55 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460001f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:55.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:55 np0005549474 python3.9[159060]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:52:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:55 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460001f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:52:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:55.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:52:56 np0005549474 python3.9[159214]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:52:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:56 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c003ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:52:57.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:52:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:52:57.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:52:57 np0005549474 python3.9[159367]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:52:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:57 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:52:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:52:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:52:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:52:57 np0005549474 python3.9[159446]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:57 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:57.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:58 np0005549474 python3.9[159623]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:52:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:52:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:52:58 np0005549474 python3.9[159701]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:52:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:58 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:59 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c003ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:59.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:59 np0005549474 python3.9[159855]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
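[annotation] Note the mode=420 in the file task above: the mode was passed as a bare integer, which ansible interprets as decimal 420, i.e. octal 0644, the same permissions the neighbouring tasks spell as mode=0644 (quoting the octal string avoids this classic trap). Quick check:

```python
# Decimal 420 and octal 0644 are the same permission bits:
print(oct(420))        # '0o644'
print(0o644 == 420)    # True
```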
Dec  7 04:52:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:52:59 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:52:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:52:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:52:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:59.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:52:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:59] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:52:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:52:59] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:53:00 np0005549474 python3.9[160007]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:53:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:53:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:00 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:00 np0005549474 python3.9[160085]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:53:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:01 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:01.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:01 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:01 np0005549474 python3.9[160239]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:53:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:01.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:02 np0005549474 python3.9[160317]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:53:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:53:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:02 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:03 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c003ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:03 np0005549474 python3.9[160470]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
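The ansible-ansible.builtin.systemd record above (daemon_reload=True enabled=True state=started) is what triggers the "systemd[1]: Reloading." and generator lines that follow. A minimal sketch using the logged parameters (task name assumed):

    - name: Enable and start edpm-container-shutdown   # name is illustrative
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        daemon_reload: true    # produces the "systemd[1]: Reloading." line below
        enabled: true
        state: started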
Dec  7 04:53:03 np0005549474 systemd[1]: Reloading.
Dec  7 04:53:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:03.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:03 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:53:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:53:03 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:53:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:03 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c003ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:03.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:04 np0005549474 python3.9[160660]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:53:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:53:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:04 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:05 np0005549474 python3.9[160739]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:53:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:05 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 04:53:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:05.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  7 04:53:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:05 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c003ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:53:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:05.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:53:05 np0005549474 python3.9[160892]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:53:06 np0005549474 python3.9[160970]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:53:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:53:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:06 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c003ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:53:07.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:53:07 np0005549474 python3.9[161123]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:53:07 np0005549474 systemd[1]: Reloading.
Dec  7 04:53:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:07 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:07 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:53:07 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:53:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:53:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:07.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:53:07 np0005549474 systemd[1]: Starting Create netns directory...
Dec  7 04:53:07 np0005549474 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  7 04:53:07 np0005549474 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  7 04:53:07 np0005549474 systemd[1]: Finished Create netns directory.
Dec  7 04:53:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:07 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:07.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:53:08 np0005549474 python3.9[161316]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:53:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:53:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:08 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c003ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:09 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:09 np0005549474 python3.9[161470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:53:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:09.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:09 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:09.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:09 np0005549474 python3.9[161593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101188.8781269-959-58369529716070/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
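Together, the directory task at 04:53:08 and the copy above stage the healthcheck script that podman later bind-mounts read-only into the container as /openstack/healthcheck. A sketch of the pair, reconstructed from the logged parameters (the src value is an assumed role-side file name; the log only shows Ansible's temporary upload path):

    - name: Create healthchecks directory   # names illustrative
      ansible.builtin.file:
        path: /var/lib/openstack/healthchecks
        state: directory
        owner: zuul
        group: zuul
        mode: "0755"
        setype: container_file_t   # SELinux label needed for container bind mounts

    - name: Install ovn_metadata_agent healthcheck script
      ansible.builtin.copy:
        src: healthcheck                 # assumed; log shows only the temp copy
        dest: /var/lib/openstack/healthchecks/ovn_metadata_agent/
        owner: zuul
        group: zuul
        mode: "0700"
        setype: container_file_t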
Dec  7 04:53:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:09] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:53:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:09] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec  7 04:53:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:53:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:10 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:11 np0005549474 python3.9[161746]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:53:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:11 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c003ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:53:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:11.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:53:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:11 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:53:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:11.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:53:11 np0005549474 python3.9[161899]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:53:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:53:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:53:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:53:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:53:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:53:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:53:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:53:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:53:12 np0005549474 python3.9[162022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101191.5566108-1034-159424340749874/.source.json _original_basename=.744dptqp follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:53:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:53:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:12 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:13 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:13.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:53:13 np0005549474 python3.9[162178]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:53:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:13 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454000fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:13.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:53:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:14 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:15 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:53:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:15.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:53:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:15 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 04:53:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:15.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 04:53:16 np0005549474 python3.9[162607]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  7 04:53:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:53:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:16 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454000fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:53:17.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:53:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:53:17.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:53:17 np0005549474 python3.9[162760]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
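container_config_data and container_config_hash are custom modules from the edpm-ansible tooling; their collection namespace is not visible in the log, so the short module names below are used as logged. Parameters are verbatim from the two invocations above (sketch only):

    - name: Read container startup configs for ovn_metadata_agent
      container_config_data:          # namespace assumed, module name as logged
        config_path: /var/lib/edpm-config/container-startup-config/ovn_metadata_agent
        config_pattern: "*.json"
        config_overrides: {}
        debug: false

    - name: Hash generated config-data volumes
      container_config_hash:          # namespace assumed, module name as logged
        config_vol_prefix: /var/lib/config-data
        check_mode: false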
Dec  7 04:53:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:17 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:17.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:17 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:53:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:17.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:53:18 np0005549474 python3.9[162938]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
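With name=None, containers.podman.podman_container_info returns data for every container on the host, which the play can use to decide whether ovn_metadata_agent must be (re)created. Equivalent task, with the register name being an assumption:

    - name: Inspect all podman containers on the host
      containers.podman.podman_container_info:
        executable: podman            # name omitted => inspect every container
      register: podman_containers     # register name is illustrative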
Dec  7 04:53:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:53:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:53:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:18 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:19 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454001e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:19.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:19 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:19.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:19 np0005549474 radosgw[96353]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Dec  7 04:53:19 np0005549474 radosgw[96353]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec  7 04:53:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:19] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  7 04:53:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:19] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec  7 04:53:19 np0005549474 radosgw[96353]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec  7 04:53:20 np0005549474 python3[163119]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
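edpm_container_manage is the custom edpm-ansible module that turns the JSON startup config into podman create and systemd unit operations; the create command it emits is logged at 04:53:30 below. Its logged arguments, rendered as a task sketch (namespace and task name assumed):

    - name: Manage ovn_metadata_agent containers from startup config   # sketch only
      edpm_container_manage:
        concurrency: 1
        config_id: ovn_metadata_agent
        config_dir: /var/lib/edpm-config/container-startup-config/ovn_metadata_agent
        config_patterns: "*.json"
        config_overrides: {}
        log_base_path: /var/log/containers/stdouts
        debug: false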
Dec  7 04:53:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:53:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:20 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:21 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:21.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:21 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454001e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:53:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:21.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:53:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:53:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:22 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:23 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:23.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:53:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:23 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:23.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Dec  7 04:53:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:24 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454002020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:25.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095325 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:53:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:25 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:53:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:25.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:53:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Dec  7 04:53:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:26 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:53:27.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:53:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:53:27.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:53:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:53:27.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:53:27 np0005549474 podman[163203]: 2025-12-07 09:53:27.267790074 +0000 UTC m=+2.080841137 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  7 04:53:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:27 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454003210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:53:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:53:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:53:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:27.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:53:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:27 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:27.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Dec  7 04:53:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:28 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:29 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:29.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:29 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454003210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:29.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:29] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 04:53:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:29] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 04:53:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:53:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Dec  7 04:53:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:30 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:30 np0005549474 podman[163132]: 2025-12-07 09:53:30.820536185 +0000 UTC m=+10.432720123 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  7 04:53:30 np0005549474 podman[163315]: 2025-12-07 09:53:30.951636789 +0000 UTC m=+0.041055521 container create cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  7 04:53:30 np0005549474 podman[163315]: 2025-12-07 09:53:30.928632447 +0000 UTC m=+0.018051199 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  7 04:53:30 np0005549474 python3[163119]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
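The PODMAN-CONTAINER-DEBUG record above is one unwrapped shell command. Re-wrapped for readability, with the bulky --label config_data={...} and ten of the twelve --volume mounts trimmed (all appear verbatim in the line above), it amounts to roughly:

    - name: Recreate ovn_metadata_agent by hand   # illustrative only; edpm_container_manage does this itself
      ansible.builtin.command:
        cmd: >-
          podman create --name ovn_metadata_agent --cgroupns=host
          --conmon-pidfile /run/ovn_metadata_agent.pid
          --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS
          --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d
          --healthcheck-command /openstack/healthcheck
          --label config_id=ovn_metadata_agent
          --label container_name=ovn_metadata_agent
          --label managed_by=edpm_ansible
          --log-driver journald --log-level info
          --network host --pid host --privileged=True --user root
          --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro
          --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z
          quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3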
Dec  7 04:53:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:31.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:31 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:53:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:31.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:53:32 np0005549474 python3.9[163507]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:53:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Dec  7 04:53:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:32 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454003210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:33 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:33 np0005549474 python3.9[163663]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:53:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:53:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:33.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:53:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:33 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:33 np0005549474 python3.9[163739]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:53:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:53:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:33.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:53:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:34 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:53:34 np0005549474 python3.9[163890]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765101213.9142842-1298-237187865686585/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:53:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 511 B/s wr, 154 op/s
Dec  7 04:53:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:34 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:35 np0005549474 python3.9[163967]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  7 04:53:35 np0005549474 systemd[1]: Reloading.
Dec  7 04:53:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:53:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:35 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454003f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:35 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:53:35 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:53:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:35.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:35 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:53:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:35.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:53:36 np0005549474 python3.9[164080]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:53:36 np0005549474 systemd[1]: Reloading.
Dec  7 04:53:36 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:53:36 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:53:36 np0005549474 systemd[1]: Starting ovn_metadata_agent container...
Dec  7 04:53:36 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:53:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0dfd88dfa0bc7139f6ee6222d4960c58456e3ffd0f306f9c671b2f0b029891/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0dfd88dfa0bc7139f6ee6222d4960c58456e3ffd0f306f9c671b2f0b029891/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:36 np0005549474 systemd[1]: Started /usr/bin/podman healthcheck run cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85.
Dec  7 04:53:36 np0005549474 podman[164122]: 2025-12-07 09:53:36.671809717 +0000 UTC m=+0.114901127 container init cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
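Note: the config_data blob podman records on container init is the edpm_ansible definition of the ovn_metadata_agent container: host network and PID namespaces, privileged, a healthcheck mount, and bind mounts that place the ansible-generated neutron config, the kolla config.json, and the OVN TLS material inside the container. As a rough illustration of how such a definition maps onto a podman invocation (a hypothetical translator covering only a few options; the real mapping is done by the edpm_ansible podman role):

    # Sketch: turn a config_data-shaped dict into `podman run` arguments.
    def podman_args(name, cfg):
        args = ["podman", "run", "--detach", "--name", name]
        if cfg.get("privileged"):
            args.append("--privileged")
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        if "pid" in cfg:
            args += ["--pid", cfg["pid"]]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return args

    cfg = {
        # image digest elided; the full reference is in the log line above
        "image": "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn",
        "net": "host", "pid": "host", "privileged": True, "user": "root",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": ["/run/openvswitch:/run/openvswitch:z"],
    }
    print(" ".join(podman_args("ovn_metadata_agent", cfg)))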
Dec  7 04:53:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: + sudo -E kolla_set_configs
Dec  7 04:53:36 np0005549474 podman[164122]: 2025-12-07 09:53:36.700570465 +0000 UTC m=+0.143661855 container start cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  7 04:53:36 np0005549474 edpm-start-podman-container[164122]: ovn_metadata_agent
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Validating config file
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Copying service configuration files
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Writing out command to execute
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
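Note: the INFO:__main__ lines above are kolla_set_configs working through /var/lib/kolla/config_files/config.json. Because KOLLA_CONFIG_STRATEGY is COPY_ALWAYS, it deletes and re-copies each listed config file on every start, fixes permissions on the listed paths, and writes the service command out (the "Writing out command to execute" line) for the start script to pick up. A minimal sketch of that loop, assuming the usual kolla config.json shape with config_files entries (source, dest, perm); the real set_configs.py also handles globs, ownership, and a separate permissions section, which this omits:

    import json, os, shutil

    def copy_always(path="/var/lib/kolla/config_files/config.json"):
        with open(path) as f:
            cfg = json.load(f)
        for entry in cfg.get("config_files", []):
            dest = entry["dest"]
            if os.path.exists(dest):            # "Deleting /etc/neutron/rootwrap.conf"
                os.remove(dest)
            shutil.copy(entry["source"], dest)  # "Copying ... to ..."
            if "perm" in entry:                 # "Setting permission for ..."
                os.chmod(dest, int(entry["perm"], 8))
        return cfg.get("command")               # later read back from /run_command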
Dec  7 04:53:36 np0005549474 edpm-start-podman-container[164121]: Creating additional drop-in dependency for "ovn_metadata_agent" (cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85)
Dec  7 04:53:36 np0005549474 podman[164145]: 2025-12-07 09:53:36.756745934 +0000 UTC m=+0.047600608 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: ++ cat /run_command
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: + CMD=neutron-ovn-metadata-agent
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: + ARGS=
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: + sudo kolla_copy_cacerts
Dec  7 04:53:36 np0005549474 systemd[1]: Reloading.
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: Running command: 'neutron-ovn-metadata-agent'
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: + [[ ! -n '' ]]
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: + . kolla_extend_start
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: + umask 0022
Dec  7 04:53:36 np0005549474 ovn_metadata_agent[164137]: + exec neutron-ovn-metadata-agent
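Note: the "+"-prefixed lines are the shell trace of the kolla start script inside the container: it reads the command kolla_set_configs wrote to /run_command, sources kolla_extend_start, sets the umask, and execs the agent so the Python process replaces the shell as the container's main process. The same control flow, rendered as Python for clarity (a sketch of what the trace shows, not the actual script):

    import os, shlex

    def kolla_start():
        with open("/run_command") as f:         # ++ cat /run_command
            cmd = f.read().strip()              # CMD=neutron-ovn-metadata-agent
        print(f"Running command: '{cmd}'")
        os.umask(0o022)                         # + umask 0022
        argv = shlex.split(cmd)
        os.execvp(argv[0], argv)                # + exec neutron-ovn-metadata-agent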
Dec  7 04:53:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:36 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:36 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:53:36 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:53:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:53:37.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:53:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:53:37.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:53:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:53:37.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
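Note: the three alertmanager lines above show its ceph-dashboard receiver POSTing an alert batch to the webhooks on compute-1 and compute-2 (port 8443, /api/prometheus_receiver), timing out on the TCP connect, and abandoning the notification after three attempts. A webhook receiver on that side only has to accept Alertmanager's JSON POST; a throwaway sketch of one (port copied from the log for illustration; no TLS or auth, unlike the dashboard's real receiver):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            # Alertmanager webhook payloads carry the alerts in an "alerts" list.
            print("received", len(payload.get("alerts", [])), "alert(s)")
            self.send_response(200)
            self.end_headers()

    # HTTPServer(("", 8443), Receiver).serve_forever()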
Dec  7 04:53:37 np0005549474 systemd[1]: Started ovn_metadata_agent container.
Dec  7 04:53:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:37 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:37 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:53:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:37 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:53:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:37 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:53:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:37.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:37 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454003f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:53:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:37.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:53:38 np0005549474 systemd[1]: session-52.scope: Deactivated successfully.
Dec  7 04:53:38 np0005549474 systemd[1]: session-52.scope: Consumed 52.531s CPU time.
Dec  7 04:53:38 np0005549474 systemd-logind[796]: Session 52 logged out. Waiting for processes to exit.
Dec  7 04:53:38 np0005549474 systemd-logind[796]: Removed session 52.
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.559 164143 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.559 164143 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.559 164143 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.560 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.560 164143 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.560 164143 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.560 164143 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.560 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.561 164143 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.561 164143 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.561 164143 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.561 164143 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.561 164143 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.561 164143 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.561 164143 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.562 164143 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.562 164143 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.562 164143 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.562 164143 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.562 164143 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.562 164143 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.562 164143 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.563 164143 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.563 164143 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.563 164143 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.563 164143 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.563 164143 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.563 164143 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.564 164143 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.564 164143 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.564 164143 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.564 164143 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.564 164143 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.564 164143 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.564 164143 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.565 164143 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.565 164143 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.565 164143 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.565 164143 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.565 164143 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.565 164143 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.566 164143 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.566 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.566 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.566 164143 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.566 164143 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.566 164143 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.566 164143 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.566 164143 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.567 164143 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.567 164143 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.567 164143 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.567 164143 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.567 164143 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.567 164143 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.567 164143 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.568 164143 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.568 164143 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.568 164143 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.568 164143 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.568 164143 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.568 164143 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.568 164143 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.569 164143 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.569 164143 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.569 164143 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.569 164143 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.569 164143 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.569 164143 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.570 164143 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.570 164143 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.570 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.570 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.570 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.570 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.570 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.571 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.571 164143 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.571 164143 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.571 164143 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.572 164143 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.572 164143 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.572 164143 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.572 164143 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.572 164143 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.572 164143 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.572 164143 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.572 164143 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.573 164143 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.573 164143 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.573 164143 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.573 164143 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.573 164143 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.573 164143 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.573 164143 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.574 164143 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.574 164143 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.574 164143 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.574 164143 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.574 164143 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.574 164143 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.574 164143 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.575 164143 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.575 164143 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.575 164143 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.575 164143 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.575 164143 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.575 164143 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.575 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.576 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.576 164143 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.576 164143 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.576 164143 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.576 164143 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.576 164143 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.577 164143 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.577 164143 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.577 164143 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.577 164143 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.577 164143 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.577 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.577 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.578 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.578 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.578 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.578 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.578 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.578 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.579 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.579 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.579 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.579 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.579 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.579 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.580 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.580 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.580 164143 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.580 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.580 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.580 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.581 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.581 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.581 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.581 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.581 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.581 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.581 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.582 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.582 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.582 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.582 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.582 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.582 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.582 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.583 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.583 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.583 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.583 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.583 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.583 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.583 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.584 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.584 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.584 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.584 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.584 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.584 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.584 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.585 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.585 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.585 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.585 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.585 164143 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.585 164143 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.585 164143 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.586 164143 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.586 164143 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.586 164143 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.586 164143 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.586 164143 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.586 164143 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.587 164143 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.587 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.587 164143 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.587 164143 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.587 164143 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.587 164143 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.587 164143 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.588 164143 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.588 164143 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.588 164143 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.588 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.588 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.588 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.588 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.589 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.589 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.589 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.589 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.589 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.589 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.589 164143 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.590 164143 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.590 164143 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.590 164143 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.590 164143 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.590 164143 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.591 164143 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.591 164143 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.591 164143 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.591 164143 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.591 164143 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.591 164143 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.591 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.591 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.592 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.592 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.592 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.592 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.592 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.592 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.592 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.592 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.592 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.593 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.593 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.593 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.593 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.593 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.593 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.593 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.593 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.593 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.594 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.594 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.594 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.594 164143 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.594 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.594 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.594 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.594 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.594 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.595 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.595 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.595 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.595 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.595 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.595 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.595 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.595 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.596 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.596 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.596 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.596 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.596 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.596 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.596 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.596 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.596 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.597 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.597 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.597 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.597 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.597 164143 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.597 164143 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.597 164143 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.597 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.597 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.598 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.598 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.598 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.598 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.598 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.598 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.598 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.598 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.599 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.599 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.599 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.599 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.599 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.599 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.599 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.599 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.600 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.600 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.600 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.600 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.600 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.600 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.600 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.600 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.601 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.601 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.601 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.601 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.601 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.601 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.601 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.601 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.601 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.602 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.602 164143 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.602 164143 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
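[editor's note] The banner above closes the option dump that every oslo.config-based service emits when debug logging is enabled: one "name = value" line per registered option, with secrets such as transport_url masked as ****. A minimal sketch of how this dump is produced, using oslo.config's real log_opt_values() API (the option name "worker_count" below is illustrative, not an actual neutron option):

    # Sketch: reproducing the "Full set of CONF" dump with oslo.config.
    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    # Illustrative option; real services register hundreds of these.
    CONF.register_opts([cfg.IntOpt('worker_count', default=8)])

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([])  # parse an (empty) command line
    # Emits one DEBUG line per option, bracketed by the ******** banner
    # seen in the log above.
    CONF.log_opt_values(LOG, logging.DEBUG)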
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.610 164143 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.611 164143 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.611 164143 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.611 164143 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.611 164143 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
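[editor's note] The three "Created schema index" lines plus the tcp:127.0.0.1:6640 connect/connected pair show the agent bringing up its local Open_vSwitch IDL; recent ovsdbapp auto-creates the Bridge.name, Port.name and Interface.name indices (autocreate_indices). A hedged sketch of equivalent standalone ovsdbapp usage — the connection string and the 10 s timeout are taken from this log (OVS.ovsdb_timeout = 10); this is not the agent's actual code path:

    # Sketch of an ovsdbapp client for the local Open_vSwitch database.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'tcp:127.0.0.1:6640'  # from the log line above
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    conn = connection.Connection(idl, timeout=10)
    api = impl_idl.OvsdbIdl(conn)  # builds the schema indices seen above
    print(api.list_br().execute(check_error=True))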
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.625 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 8da81261-a5d6-4df8-aa54-d9c0c8f72a67 (UUID: 8da81261-a5d6-4df8-aa54-d9c0c8f72a67) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.646 164143 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.646 164143 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.646 164143 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.647 164143 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.649 164143 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.655 164143 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.662 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '8da81261-a5d6-4df8-aa54-d9c0c8f72a67'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], external_ids={}, name=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, nb_cfg_timestamp=1765101152414, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
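[editor's note] The "Matched CREATE" line is ovsdbapp's event machinery firing a row event whose conditions match the agent's own Chassis_Private record in the OVN Southbound DB. A minimal sketch of such an event class, assuming ovsdbapp's RowEvent interface (the class name mirrors the log entry; the run() body is illustrative only):

    # Sketch of an ovsdbapp row event like the one matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    CHASSIS_NAME = '8da81261-a5d6-4df8-aa54-d9c0c8f72a67'  # from the log

    class ChassisPrivateCreateEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__(
                (self.ROW_CREATE,),                # events=('create',)
                'Chassis_Private',                 # table
                (('name', '=', CHASSIS_NAME),))    # conditions
            self.event_name = 'ChassisPrivateCreateEvent'

        def run(self, event, row, old):
            # Fires once the SB DB reports our own Chassis_Private row.
            print('chassis registered:', row.name)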
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.663 164143 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd8beb4cf70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.663 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.664 164143 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.664 164143 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.664 164143 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.669 164143 DEBUG oslo_service.service [-] Started child 164276 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
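[editor's note] "Starting 1 workers" followed by "Started child 164276" is oslo.service's ProcessLauncher forking the metadata-proxy worker; the after_init callback subscribed two lines earlier then runs in the child (pid 164276 in the subsequent log lines). A minimal sketch of that launcher API — ProcessLauncher and launch_service are the real oslo.service entry points, the Worker class is illustrative:

    # Sketch of the oslo.service worker fork seen above.
    from oslo_config import cfg
    from oslo_service import service

    class Worker(service.Service):
        def start(self):
            super().start()
            # Child-process work goes here; the agent starts its
            # metadata-proxy WSGI server at this point.

    cfg.CONF([])  # parse an (empty) command line first
    launcher = service.ProcessLauncher(cfg.CONF)
    launcher.launch_service(Worker(), workers=1)  # logs "Started child <pid>"
    launcher.wait()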
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.673 164143 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmptqlhe5on/privsep.sock']#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.673 164276 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-427487'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec  7 04:53:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.709 164276 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.710 164276 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.710 164276 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.713 164276 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.720 164276 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  7 04:53:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:38.727 164276 INFO eventlet.wsgi.server [-] (164276) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Dec  7 04:53:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:38 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:39 np0005549474 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  7 04:53:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:39 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:39.355 164143 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  7 04:53:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:39.356 164143 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmptqlhe5on/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  7 04:53:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:39.243 164283 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  7 04:53:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:39.247 164283 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  7 04:53:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:39.249 164283 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  7 04:53:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:39.249 164283 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164283#033[00m
Dec  7 04:53:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:39.358 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[144aefd2-e14f-4a9a-a8bd-ae352f17308b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
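[editor's note] The sequence from "Running privsep helper" to "privsep daemon running as pid 164283" is an oslo.privsep daemon spawn: the unprivileged agent execs sudo + neutron-rootwrap + privsep-helper, the helper keeps only the capabilities configured for the context (here CAP_SYS_ADMIN; in the option dump, capability 21 is CAP_SYS_ADMIN and 12 is CAP_NET_ADMIN), and replies to the agent over the temporary unix socket. The kernel's "deprecated v2 capabilities" warning above is a side effect of this capability juggling. A minimal sketch of defining such a context, following the real oslo.privsep API (the context variable and entrypoint function are illustrative):

    # Sketch of an oslo.privsep context like neutron.privileged.namespace_cmd.
    from oslo_privsep import capabilities, priv_context

    namespace_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep_namespace',   # matches [privsep_namespace] opts above
        pflags=capabilities.CHANGE_KEEP_CAPS,
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @namespace_cmd.entrypoint
    def create_netns(name):
        # Runs inside the privsep daemon (uid/gid 0/0, CAP_SYS_ADMIN),
        # not in the calling agent process. Body is illustrative.
        ...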
Dec  7 04:53:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:53:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:39.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
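[editor's note] Each radosgw "beast:" line is an access-log record: request handle, client address, user (anonymous here, from a load-balancer HEAD health check), timestamp, request line, HTTP status, byte count, and latency. A hedged parsing sketch; the field layout is inferred from the sample line above, not from the rgw source, so verify it against your radosgw version:

    # Sketch: extract the basic fields from a radosgw beast access-log line.
    import re

    LINE = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:09:53:39.545 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000027s')

    pat = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    m = pat.search(LINE)
    print(m.group('addr'), m.group('status'), m.group('latency'))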
Dec  7 04:53:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:39 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:39.870 164283 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 04:53:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:39.870 164283 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 04:53:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:39.870 164283 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 04:53:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:39.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:39] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 04:53:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:39] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 04:53:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:53:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:40 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.418 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[88435976-1878-4273-a5a4-90acc7c3019c]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.421 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, column=external_ids, values=({'neutron:ovn-metadata-id': '29d4c112-eb1c-57a4-ab26-92fa7ac095b6'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.447 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
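[editor's note] The two "Running txn" lines are the agent registering itself in the Southbound DB: a DbAddCommand that adds its metadata-agent UUID to the chassis external_ids map, then a DbSetCommand that records the OVN bridge. A hedged sketch of the equivalent calls through ovsdbapp's generic db_add/db_set API, with values copied from the log (SSL key/cert setup for the ssl: endpoint is omitted; if_exists on db_set assumes a recent ovsdbapp):

    # Sketch of the two Chassis_Private transactions above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB = 'ssl:ovsdbserver-sb.openstack.svc:6642'  # from the log
    idl = connection.OvsdbIdl.from_server(SB, 'OVN_Southbound')
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=180))

    CHASSIS = '8da81261-a5d6-4df8-aa54-d9c0c8f72a67'

    api.db_add('Chassis_Private', CHASSIS, 'external_ids',
               {'neutron:ovn-metadata-id':
                '29d4c112-eb1c-57a4-ab26-92fa7ac095b6'}
               ).execute(check_error=True)

    api.db_set('Chassis_Private', CHASSIS,
               ('external_ids', {'neutron:ovn-bridge': 'br-int'}),
               if_exists=True).execute(check_error=True)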
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.470 164143 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.470 164143 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.470 164143 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.470 164143 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.470 164143 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.471 164143 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.471 164143 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.471 164143 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.471 164143 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.471 164143 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.471 164143 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.471 164143 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.471 164143 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.472 164143 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.472 164143 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.472 164143 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.472 164143 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.472 164143 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.472 164143 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.472 164143 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.472 164143 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.473 164143 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.473 164143 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.473 164143 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.473 164143 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.473 164143 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.473 164143 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.473 164143 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.473 164143 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.474 164143 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.474 164143 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.474 164143 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.474 164143 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.474 164143 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.474 164143 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.474 164143 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.474 164143 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.475 164143 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.475 164143 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.475 164143 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.475 164143 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.475 164143 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.475 164143 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.475 164143 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.475 164143 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.475 164143 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.476 164143 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.476 164143 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.476 164143 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.476 164143 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.476 164143 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.476 164143 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.476 164143 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.476 164143 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.476 164143 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.477 164143 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.477 164143 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.477 164143 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.477 164143 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.477 164143 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.477 164143 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.477 164143 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.477 164143 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.477 164143 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.478 164143 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.478 164143 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.478 164143 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.478 164143 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.478 164143 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.478 164143 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.478 164143 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.478 164143 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.478 164143 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.478 164143 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.479 164143 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.479 164143 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.479 164143 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.479 164143 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.479 164143 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.479 164143 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.479 164143 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.479 164143 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.479 164143 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.479 164143 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.480 164143 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.480 164143 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.480 164143 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.480 164143 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.480 164143 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.480 164143 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.480 164143 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.480 164143 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.480 164143 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.481 164143 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.481 164143 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.481 164143 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.481 164143 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.481 164143 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.481 164143 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.481 164143 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.481 164143 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.481 164143 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.481 164143 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.482 164143 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.482 164143 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.482 164143 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.482 164143 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.482 164143 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.482 164143 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.482 164143 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.482 164143 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.482 164143 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.483 164143 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.483 164143 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.483 164143 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.483 164143 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.483 164143 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.483 164143 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.483 164143 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.483 164143 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.483 164143 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.484 164143 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.484 164143 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.484 164143 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.484 164143 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.484 164143 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.484 164143 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.484 164143 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.484 164143 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.485 164143 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.485 164143 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.485 164143 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.485 164143 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.485 164143 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.485 164143 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.485 164143 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.486 164143 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.486 164143 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.486 164143 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.486 164143 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.486 164143 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.486 164143 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.486 164143 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.486 164143 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.487 164143 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.487 164143 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.487 164143 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.487 164143 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.487 164143 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.487 164143 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.487 164143 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.487 164143 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.487 164143 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.487 164143 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.488 164143 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.488 164143 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.488 164143 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.488 164143 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.488 164143 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.488 164143 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.488 164143 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.488 164143 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.488 164143 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.488 164143 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.489 164143 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.489 164143 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.489 164143 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.489 164143 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.489 164143 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.489 164143 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.489 164143 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.489 164143 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.489 164143 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.489 164143 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.490 164143 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.490 164143 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.490 164143 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.490 164143 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.490 164143 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.490 164143 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.490 164143 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.490 164143 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.491 164143 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.491 164143 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.491 164143 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.491 164143 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.491 164143 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.491 164143 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.491 164143 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.491 164143 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.492 164143 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.492 164143 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.492 164143 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.492 164143 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.492 164143 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.492 164143 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.492 164143 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.492 164143 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.492 164143 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.493 164143 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.493 164143 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.493 164143 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.493 164143 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.493 164143 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.493 164143 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.493 164143 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.493 164143 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.493 164143 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.493 164143 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.494 164143 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.494 164143 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.494 164143 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.494 164143 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.494 164143 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.494 164143 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.494 164143 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.494 164143 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.494 164143 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.494 164143 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.495 164143 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.495 164143 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.495 164143 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.495 164143 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.495 164143 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.495 164143 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.495 164143 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.495 164143 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.495 164143 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.495 164143 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.496 164143 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.496 164143 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.496 164143 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.496 164143 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.496 164143 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.496 164143 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.496 164143 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.496 164143 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.496 164143 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.496 164143 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.497 164143 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.497 164143 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.497 164143 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.497 164143 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.497 164143 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.497 164143 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.497 164143 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.497 164143 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.498 164143 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.498 164143 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.498 164143 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.498 164143 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.498 164143 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.498 164143 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.498 164143 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.499 164143 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.499 164143 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.499 164143 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.499 164143 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.499 164143 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.499 164143 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.499 164143 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.499 164143 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.499 164143 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.499 164143 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.500 164143 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.500 164143 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.500 164143 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.500 164143 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.500 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.500 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.500 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.500 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.501 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.501 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.501 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.501 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.501 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.501 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.501 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.501 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.501 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.501 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.502 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.502 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.502 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.502 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.502 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.502 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.502 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.502 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.502 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.502 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.503 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.503 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.503 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.503 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.503 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.503 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.503 164143 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.503 164143 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.504 164143 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.504 164143 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.504 164143 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 04:53:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:53:40.504 164143 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
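The DEBUG block above is oslo.config's log_opt_values dump: one "group.option = value" pair per line, closed off by the row of asterisks (secrets such as transport_url are masked as ****). A minimal Python sketch, assuming exactly the line layout shown here, that folds such journal lines back into a {group: {option: value}} dict for offline inspection:

    import re
    from collections import defaultdict

    # Fitted to lines like:
    # "... DEBUG oslo_service.service [-] ovn.ovn_l3_mode  = True log_opt_values ..."
    OPT_RE = re.compile(
        r"DEBUG oslo_service\.service \[-\] "
        r"(?P<group>\w+)\.(?P<option>\w+)\s+= (?P<value>.*?) log_opt_values"
    )

    def parse_opt_dump(lines):
        """Collect log_opt_values lines into {group: {option: value}}."""
        opts = defaultdict(dict)
        for line in lines:
            m = OPT_RE.search(line)
            if m:
                opts[m.group("group")][m.group("option")] = m.group("value")
        return dict(opts)

Run over this section it yields, for example, opts["ovn"]["ovn_sb_connection"] == "ssl:ovsdbserver-sb.openstack.svc:6642", with unset options (ovn.ovn_nb_ca_cert and friends) mapped to empty strings.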
Dec  7 04:53:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:53:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:40 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454003f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:41 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8484004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:53:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:41.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
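radosgw's beast frontend logs one access line per request; the HEAD / probes arriving every two seconds from 192.168.122.100 and .102, always anonymous and always 200, look like load-balancer health checks. A sketch that splits such beast lines into fields (the regex is fitted to the lines in this log, not taken from a canonical format definition):

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:09:53:41.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000027s')
    m = BEAST_RE.search(line)
    print(m.group("client"), m.group("status"), m.group("latency"))
    # -> 192.168.122.100 200 0.001000027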
Dec  7 04:53:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:41 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:41.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:53:42
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.meta', 'default.rgw.control', 'backups', 'volumes', '.nfs', 'cephfs.cephfs.data', '.mgr']
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:53:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:53:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
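The pg_autoscaler lines all follow the same arithmetic: the printed "pg target" equals the pool's share of raw space times its bias times a fixed PG budget. The figures logged here are consistent with a budget of 300 (7.185749983720779e-06 * 1.0 * 300 = 0.0021557249951162337 for '.mgr'), which would correspond to 3 OSDs at the default mon_target_pg_per_osd of 100; that split is an inference from the numbers, not something the module prints. A checkable sketch:

    import math

    # Assumed split of the budget: 3 OSDs * mon_target_pg_per_osd=100.
    # Only the product (300) is actually pinned down by the logged values.
    PG_BUDGET = 3 * 100

    def raw_pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * PG_BUDGET

    # '.mgr' line:          ratio 7.185749983720779e-06, bias 1.0
    assert math.isclose(raw_pg_target(7.185749983720779e-06, 1.0),
                        0.0021557249951162337)
    # 'cephfs.cephfs.meta': ratio 5.087256625643029e-07, bias 4.0
    assert math.isclose(raw_pg_target(5.087256625643029e-07, 4.0),
                        0.0006104707950771635)

The "quantized to N (current N)" tail is the module rounding that raw target to a power of two while declining to resize an existing pool for a negligible difference, which is why pools with a near-zero ideal still report their current pg_num.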
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:53:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:53:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:42 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f848c00a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:43 np0005549474 systemd-logind[796]: New session 53 of user zuul.
Dec  7 04:53:43 np0005549474 systemd[1]: Started Session 53 of User zuul.
Dec  7 04:53:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:43 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454003f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:53:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
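Each mgr-issued command shows up twice in the mon log: a handle_command line carrying the raw mon_command JSON, and a matching audit-channel dispatch line. A sketch that pulls the command prefixes back out of the audit lines (a few audit lines above are truncated after the entity and carry no cmd= payload; those are simply skipped):

    import json
    import re

    AUDIT_RE = re.compile(r"cmd=\[(?P<cmd>\{.*\})\]: dispatch")

    def audited_prefixes(lines):
        """Yield the "prefix" of every fully-logged audited command."""
        for line in lines:
            m = AUDIT_RE.search(line)
            if m:
                yield json.loads(m.group("cmd"))["prefix"]

    # Over the burst above this yields: "config generate-minimal-conf",
    # "auth get", "osd tree", "auth get", "config generate-minimal-conf".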
Dec  7 04:53:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:43.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:43 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:43.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:44 np0005549474 podman[164618]: 2025-12-07 09:53:43.980687457 +0000 UTC m=+0.022564051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:53:44 np0005549474 podman[164618]: 2025-12-07 09:53:44.085952703 +0000 UTC m=+0.127829297 container create 151f7da121ad5ae9a51027ea893892c0f336bdba391871230ecf723c543178d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_almeida, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:53:44 np0005549474 systemd[1]: Started libpod-conmon-151f7da121ad5ae9a51027ea893892c0f336bdba391871230ecf723c543178d1.scope.
Dec  7 04:53:44 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:53:44 np0005549474 podman[164618]: 2025-12-07 09:53:44.16202169 +0000 UTC m=+0.203898304 container init 151f7da121ad5ae9a51027ea893892c0f336bdba391871230ecf723c543178d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:53:44 np0005549474 podman[164618]: 2025-12-07 09:53:44.169138602 +0000 UTC m=+0.211015166 container start 151f7da121ad5ae9a51027ea893892c0f336bdba391871230ecf723c543178d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 04:53:44 np0005549474 podman[164618]: 2025-12-07 09:53:44.173258483 +0000 UTC m=+0.215135097 container attach 151f7da121ad5ae9a51027ea893892c0f336bdba391871230ecf723c543178d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_almeida, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:53:44 np0005549474 reverent_almeida[164635]: 167 167
Dec  7 04:53:44 np0005549474 systemd[1]: libpod-151f7da121ad5ae9a51027ea893892c0f336bdba391871230ecf723c543178d1.scope: Deactivated successfully.
Dec  7 04:53:44 np0005549474 podman[164618]: 2025-12-07 09:53:44.177461198 +0000 UTC m=+0.219337792 container died 151f7da121ad5ae9a51027ea893892c0f336bdba391871230ecf723c543178d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_almeida, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:53:44 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:53:44 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:53:44 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:53:44 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:53:44 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6369065ef5f4586d5d775a2d7a43e841032166db1221177c4c64c9fe08caa276-merged.mount: Deactivated successfully.
Dec  7 04:53:44 np0005549474 python3.9[164617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:53:44 np0005549474 podman[164618]: 2025-12-07 09:53:44.296815294 +0000 UTC m=+0.338691868 container remove 151f7da121ad5ae9a51027ea893892c0f336bdba391871230ecf723c543178d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_almeida, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 04:53:44 np0005549474 systemd[1]: libpod-conmon-151f7da121ad5ae9a51027ea893892c0f336bdba391871230ecf723c543178d1.scope: Deactivated successfully.
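The reverent_almeida container above is one of the short-lived helpers cephadm launches: the journal records its entire lifecycle (image pull, create, init, start, attach, died, remove) under a single 64-hex container id before systemd tears down the conmon scope. A sketch that groups such podman event lines by container id, assuming the "container <event> <id>" wording used in this log:

    import re
    from collections import defaultdict

    EVENT_RE = re.compile(r' container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) ')

    def lifecycles(lines):
        """Map container id -> ordered list of podman lifecycle events."""
        events = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                events[m.group("cid")].append(m.group("event"))
        return dict(events)

    # For the 151f7da1... container above this yields
    # ['create', 'init', 'start', 'attach', 'died', 'remove'].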
Dec  7 04:53:44 np0005549474 podman[164663]: 2025-12-07 09:53:44.463326376 +0000 UTC m=+0.042372317 container create ea9ec9117cdc87ac276cc1dd3a1235178d00e9490fff81e2c3d899168c773264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:53:44 np0005549474 systemd[1]: Started libpod-conmon-ea9ec9117cdc87ac276cc1dd3a1235178d00e9490fff81e2c3d899168c773264.scope.
Dec  7 04:53:44 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:53:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46a837accce6fd60b047a5589dead1f5d93a9159f9a9dc311e519f54ad4da6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46a837accce6fd60b047a5589dead1f5d93a9159f9a9dc311e519f54ad4da6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46a837accce6fd60b047a5589dead1f5d93a9159f9a9dc311e519f54ad4da6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46a837accce6fd60b047a5589dead1f5d93a9159f9a9dc311e519f54ad4da6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46a837accce6fd60b047a5589dead1f5d93a9159f9a9dc311e519f54ad4da6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:44 np0005549474 podman[164663]: 2025-12-07 09:53:44.531761016 +0000 UTC m=+0.110806977 container init ea9ec9117cdc87ac276cc1dd3a1235178d00e9490fff81e2c3d899168c773264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:53:44 np0005549474 podman[164663]: 2025-12-07 09:53:44.443329855 +0000 UTC m=+0.022375826 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:53:44 np0005549474 podman[164663]: 2025-12-07 09:53:44.539264919 +0000 UTC m=+0.118310860 container start ea9ec9117cdc87ac276cc1dd3a1235178d00e9490fff81e2c3d899168c773264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcclintock, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Dec  7 04:53:44 np0005549474 podman[164663]: 2025-12-07 09:53:44.566034622 +0000 UTC m=+0.145080583 container attach ea9ec9117cdc87ac276cc1dd3a1235178d00e9490fff81e2c3d899168c773264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcclintock, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:53:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:53:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:44 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:44 np0005549474 xenodochial_mcclintock[164680]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:53:44 np0005549474 xenodochial_mcclintock[164680]: --> All data devices are unavailable
Dec  7 04:53:44 np0005549474 systemd[1]: libpod-ea9ec9117cdc87ac276cc1dd3a1235178d00e9490fff81e2c3d899168c773264.scope: Deactivated successfully.
Dec  7 04:53:44 np0005549474 podman[164663]: 2025-12-07 09:53:44.879959639 +0000 UTC m=+0.459005600 container died ea9ec9117cdc87ac276cc1dd3a1235178d00e9490fff81e2c3d899168c773264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:53:44 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e46a837accce6fd60b047a5589dead1f5d93a9159f9a9dc311e519f54ad4da6c-merged.mount: Deactivated successfully.
Dec  7 04:53:45 np0005549474 podman[164663]: 2025-12-07 09:53:45.149284651 +0000 UTC m=+0.728330592 container remove ea9ec9117cdc87ac276cc1dd3a1235178d00e9490fff81e2c3d899168c773264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mcclintock, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:53:45 np0005549474 systemd[1]: libpod-conmon-ea9ec9117cdc87ac276cc1dd3a1235178d00e9490fff81e2c3d899168c773264.scope: Deactivated successfully.
Dec  7 04:53:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:53:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:45 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:45 np0005549474 python3.9[164909]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:53:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:45.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:45 np0005549474 podman[164988]: 2025-12-07 09:53:45.651038737 +0000 UTC m=+0.033870657 container create 22f232614e6a9b3b4e8c7135ae6d693964e1e762eab0a96bf7542665333e664f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bose, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 04:53:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095345 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:53:45 np0005549474 systemd[1]: Started libpod-conmon-22f232614e6a9b3b4e8c7135ae6d693964e1e762eab0a96bf7542665333e664f.scope.
Dec  7 04:53:45 np0005549474 podman[164988]: 2025-12-07 09:53:45.635470715 +0000 UTC m=+0.018302665 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:53:45 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:53:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:45 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454003f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:45 np0005549474 podman[164988]: 2025-12-07 09:53:45.819752628 +0000 UTC m=+0.202584568 container init 22f232614e6a9b3b4e8c7135ae6d693964e1e762eab0a96bf7542665333e664f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bose, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:53:45 np0005549474 podman[164988]: 2025-12-07 09:53:45.826096329 +0000 UTC m=+0.208928249 container start 22f232614e6a9b3b4e8c7135ae6d693964e1e762eab0a96bf7542665333e664f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:53:45 np0005549474 podman[164988]: 2025-12-07 09:53:45.829367407 +0000 UTC m=+0.212199337 container attach 22f232614e6a9b3b4e8c7135ae6d693964e1e762eab0a96bf7542665333e664f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:53:45 np0005549474 bold_bose[165005]: 167 167
Dec  7 04:53:45 np0005549474 systemd[1]: libpod-22f232614e6a9b3b4e8c7135ae6d693964e1e762eab0a96bf7542665333e664f.scope: Deactivated successfully.
Dec  7 04:53:45 np0005549474 conmon[165005]: conmon 22f232614e6a9b3b4e8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22f232614e6a9b3b4e8c7135ae6d693964e1e762eab0a96bf7542665333e664f.scope/container/memory.events
Dec  7 04:53:45 np0005549474 podman[164988]: 2025-12-07 09:53:45.832159503 +0000 UTC m=+0.214991423 container died 22f232614e6a9b3b4e8c7135ae6d693964e1e762eab0a96bf7542665333e664f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bose, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 04:53:45 np0005549474 systemd[1]: var-lib-containers-storage-overlay-086db4c5a9b54b828f440c273ad5e16c31e8ecc4d922b5768f3e61d95beaca4e-merged.mount: Deactivated successfully.
Dec  7 04:53:45 np0005549474 podman[164988]: 2025-12-07 09:53:45.865832783 +0000 UTC m=+0.248664703 container remove 22f232614e6a9b3b4e8c7135ae6d693964e1e762eab0a96bf7542665333e664f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bose, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:53:45 np0005549474 systemd[1]: libpod-conmon-22f232614e6a9b3b4e8c7135ae6d693964e1e762eab0a96bf7542665333e664f.scope: Deactivated successfully.
Dec  7 04:53:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:45.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:46 np0005549474 podman[165029]: 2025-12-07 09:53:46.083235311 +0000 UTC m=+0.065483582 container create ad6fe41c649b5096f18122d2ad7e9937a878114257b990267dc03031777ad7af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_clarke, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 04:53:46 np0005549474 systemd[1]: Started libpod-conmon-ad6fe41c649b5096f18122d2ad7e9937a878114257b990267dc03031777ad7af.scope.
Dec  7 04:53:46 np0005549474 podman[165029]: 2025-12-07 09:53:46.046146598 +0000 UTC m=+0.028394919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:53:46 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:53:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c27e701e3ca25852f72b4b93e1bbfdfc00270502a1597c1426e550e0feaf90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c27e701e3ca25852f72b4b93e1bbfdfc00270502a1597c1426e550e0feaf90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c27e701e3ca25852f72b4b93e1bbfdfc00270502a1597c1426e550e0feaf90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c27e701e3ca25852f72b4b93e1bbfdfc00270502a1597c1426e550e0feaf90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:46 np0005549474 podman[165029]: 2025-12-07 09:53:46.625344336 +0000 UTC m=+0.607592587 container init ad6fe41c649b5096f18122d2ad7e9937a878114257b990267dc03031777ad7af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_clarke, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:53:46 np0005549474 podman[165029]: 2025-12-07 09:53:46.633123517 +0000 UTC m=+0.615371748 container start ad6fe41c649b5096f18122d2ad7e9937a878114257b990267dc03031777ad7af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 04:53:46 np0005549474 podman[165029]: 2025-12-07 09:53:46.63689094 +0000 UTC m=+0.619139201 container attach ad6fe41c649b5096f18122d2ad7e9937a878114257b990267dc03031777ad7af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_clarke, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 04:53:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Dec  7 04:53:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:46 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:46 np0005549474 determined_clarke[165046]: {
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:    "0": [
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:        {
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            "devices": [
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "/dev/loop3"
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            ],
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            "lv_name": "ceph_lv0",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            "lv_size": "21470642176",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            "name": "ceph_lv0",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            "tags": {
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.cluster_name": "ceph",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.crush_device_class": "",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.encrypted": "0",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.osd_id": "0",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.type": "block",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.vdo": "0",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:                "ceph.with_tpm": "0"
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            },
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            "type": "block",
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:            "vg_name": "ceph_vg0"
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:        }
Dec  7 04:53:46 np0005549474 determined_clarke[165046]:    ]
Dec  7 04:53:46 np0005549474 determined_clarke[165046]: }
Dec  7 04:53:46 np0005549474 systemd[1]: libpod-ad6fe41c649b5096f18122d2ad7e9937a878114257b990267dc03031777ad7af.scope: Deactivated successfully.
Dec  7 04:53:46 np0005549474 podman[165029]: 2025-12-07 09:53:46.931304029 +0000 UTC m=+0.913552280 container died ad6fe41c649b5096f18122d2ad7e9937a878114257b990267dc03031777ad7af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:53:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:53:47.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:53:47 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d1c27e701e3ca25852f72b4b93e1bbfdfc00270502a1597c1426e550e0feaf90-merged.mount: Deactivated successfully.
Dec  7 04:53:47 np0005549474 podman[165029]: 2025-12-07 09:53:47.026603686 +0000 UTC m=+1.008851937 container remove ad6fe41c649b5096f18122d2ad7e9937a878114257b990267dc03031777ad7af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 04:53:47 np0005549474 systemd[1]: libpod-conmon-ad6fe41c649b5096f18122d2ad7e9937a878114257b990267dc03031777ad7af.scope: Deactivated successfully.
Dec  7 04:53:47 np0005549474 python3.9[165180]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  7 04:53:47 np0005549474 systemd[1]: Reloading.
Dec  7 04:53:47 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:53:47 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:53:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:47 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:53:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:47.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:53:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:47 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:47 np0005549474 podman[165346]: 2025-12-07 09:53:47.879157524 +0000 UTC m=+0.118060092 container create f7c9683ac3cc38496de6f6d8b6d73cef8c599298f994f5ab5b5190eb404488cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Dec  7 04:53:47 np0005549474 podman[165346]: 2025-12-07 09:53:47.834783305 +0000 UTC m=+0.073685883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:53:47 np0005549474 systemd[1]: Started libpod-conmon-f7c9683ac3cc38496de6f6d8b6d73cef8c599298f994f5ab5b5190eb404488cf.scope.
Dec  7 04:53:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:53:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:47.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:53:47 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:53:48 np0005549474 podman[165346]: 2025-12-07 09:53:48.158002993 +0000 UTC m=+0.396905591 container init f7c9683ac3cc38496de6f6d8b6d73cef8c599298f994f5ab5b5190eb404488cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:53:48 np0005549474 podman[165346]: 2025-12-07 09:53:48.164477258 +0000 UTC m=+0.403379826 container start f7c9683ac3cc38496de6f6d8b6d73cef8c599298f994f5ab5b5190eb404488cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_engelbart, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:53:48 np0005549474 jolly_engelbart[165415]: 167 167
Dec  7 04:53:48 np0005549474 systemd[1]: libpod-f7c9683ac3cc38496de6f6d8b6d73cef8c599298f994f5ab5b5190eb404488cf.scope: Deactivated successfully.
Dec  7 04:53:48 np0005549474 conmon[165415]: conmon f7c9683ac3cc38496de6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7c9683ac3cc38496de6f6d8b6d73cef8c599298f994f5ab5b5190eb404488cf.scope/container/memory.events
Dec  7 04:53:48 np0005549474 podman[165346]: 2025-12-07 09:53:48.178659012 +0000 UTC m=+0.417561580 container attach f7c9683ac3cc38496de6f6d8b6d73cef8c599298f994f5ab5b5190eb404488cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 04:53:48 np0005549474 podman[165346]: 2025-12-07 09:53:48.178933199 +0000 UTC m=+0.417835767 container died f7c9683ac3cc38496de6f6d8b6d73cef8c599298f994f5ab5b5190eb404488cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:53:48 np0005549474 systemd[1]: var-lib-containers-storage-overlay-dd2738275e306c60122671a7d34d93f155258b3ab955619fb4909c08ebc5d13f-merged.mount: Deactivated successfully.
Dec  7 04:53:48 np0005549474 python3.9[165494]: ansible-ansible.builtin.service_facts Invoked
Dec  7 04:53:48 np0005549474 network[165521]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  7 04:53:48 np0005549474 network[165522]: 'network-scripts' will be removed from distribution in near future.
Dec  7 04:53:48 np0005549474 network[165523]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  7 04:53:48 np0005549474 podman[165346]: 2025-12-07 09:53:48.496126164 +0000 UTC m=+0.735028742 container remove f7c9683ac3cc38496de6f6d8b6d73cef8c599298f994f5ab5b5190eb404488cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_engelbart, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 04:53:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Dec  7 04:53:48 np0005549474 podman[165539]: 2025-12-07 09:53:48.700330505 +0000 UTC m=+0.037612857 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:53:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:48 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:48 np0005549474 podman[165539]: 2025-12-07 09:53:48.92316458 +0000 UTC m=+0.260446912 container create 7ee6fe20aa2c61511f0aecddaf687d2822e43eff8feba2e5bb1f9975edf9e5f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ardinghelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  7 04:53:49 np0005549474 systemd[1]: libpod-conmon-f7c9683ac3cc38496de6f6d8b6d73cef8c599298f994f5ab5b5190eb404488cf.scope: Deactivated successfully.
Dec  7 04:53:49 np0005549474 systemd[1]: Started libpod-conmon-7ee6fe20aa2c61511f0aecddaf687d2822e43eff8feba2e5bb1f9975edf9e5f5.scope.
Dec  7 04:53:49 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:53:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ea6e72532553d7e1dd87526f5aa5956c1ba2d435b80b7341a420934458e273/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ea6e72532553d7e1dd87526f5aa5956c1ba2d435b80b7341a420934458e273/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ea6e72532553d7e1dd87526f5aa5956c1ba2d435b80b7341a420934458e273/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:49 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ea6e72532553d7e1dd87526f5aa5956c1ba2d435b80b7341a420934458e273/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:53:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:49 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84780008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:49 np0005549474 podman[165539]: 2025-12-07 09:53:49.347546443 +0000 UTC m=+0.684828785 container init 7ee6fe20aa2c61511f0aecddaf687d2822e43eff8feba2e5bb1f9975edf9e5f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ardinghelli, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:53:49 np0005549474 podman[165539]: 2025-12-07 09:53:49.35778667 +0000 UTC m=+0.695069032 container start 7ee6fe20aa2c61511f0aecddaf687d2822e43eff8feba2e5bb1f9975edf9e5f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  7 04:53:49 np0005549474 podman[165539]: 2025-12-07 09:53:49.421758419 +0000 UTC m=+0.759040781 container attach 7ee6fe20aa2c61511f0aecddaf687d2822e43eff8feba2e5bb1f9975edf9e5f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ardinghelli, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:53:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:49.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:49 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:53:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:49.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:53:49 np0005549474 lvm[165684]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:53:49 np0005549474 lvm[165684]: VG ceph_vg0 finished
Dec  7 04:53:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:49] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 04:53:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:49] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 04:53:50 np0005549474 nervous_ardinghelli[165559]: {}
Dec  7 04:53:50 np0005549474 systemd[1]: libpod-7ee6fe20aa2c61511f0aecddaf687d2822e43eff8feba2e5bb1f9975edf9e5f5.scope: Deactivated successfully.
Dec  7 04:53:50 np0005549474 podman[165539]: 2025-12-07 09:53:50.055068741 +0000 UTC m=+1.392351073 container died 7ee6fe20aa2c61511f0aecddaf687d2822e43eff8feba2e5bb1f9975edf9e5f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:53:50 np0005549474 systemd[1]: libpod-7ee6fe20aa2c61511f0aecddaf687d2822e43eff8feba2e5bb1f9975edf9e5f5.scope: Consumed 1.047s CPU time.
Dec  7 04:53:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:53:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 04:53:50 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e2ea6e72532553d7e1dd87526f5aa5956c1ba2d435b80b7341a420934458e273-merged.mount: Deactivated successfully.
Dec  7 04:53:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:50 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:50 np0005549474 podman[165539]: 2025-12-07 09:53:50.891305148 +0000 UTC m=+2.228587480 container remove 7ee6fe20aa2c61511f0aecddaf687d2822e43eff8feba2e5bb1f9975edf9e5f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ardinghelli, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:53:50 np0005549474 systemd[1]: libpod-conmon-7ee6fe20aa2c61511f0aecddaf687d2822e43eff8feba2e5bb1f9975edf9e5f5.scope: Deactivated successfully.
Dec  7 04:53:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:53:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:51 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  7 04:53:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:51.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  7 04:53:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:51 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84780008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:51.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:53:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:52 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:53 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:53.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:53 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c002de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:53:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:53:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:53:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:53.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:53:54 np0005549474 python3.9[165937]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:53:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:54 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478002340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:54 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:53:54 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:53:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:53:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:55 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:55.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:55 np0005549474 python3.9[166092]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:53:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095355 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:53:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:55 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:55.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:56 np0005549474 python3.9[166245]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:53:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:53:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:56 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0033a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:53:57.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:53:57 np0005549474 python3.9[166398]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:53:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:57 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8478002340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:53:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:53:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:57.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:57 np0005549474 python3.9[166553]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:53:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:57 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:57.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:58 np0005549474 python3.9[166731]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:53:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:53:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:58 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:59 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0033a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:59.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:53:59 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f845c0033a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:53:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:53:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:53:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:59.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:53:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:59] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 04:53:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:53:59] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 04:54:00 np0005549474 python3.9[166885]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:54:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:54:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:54:00 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:01 np0005549474 podman[166966]: 2025-12-07 09:54:01.264268487 +0000 UTC m=+0.080168649 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  7 04:54:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:54:01 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:01.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:01 np0005549474 python3.9[167067]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:54:01 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:01.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:02 np0005549474 python3.9[167219]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:54:02 np0005549474 python3.9[167371]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[144977]: 07/12/2025 09:54:02 : epoch 69354e1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8460003240 fd 38 proxy ignored for local
Dec  7 04:54:02 np0005549474 kernel: ganesha.nfsd[164341]: segfault at 50 ip 00007f8534cbd32e sp 00007f84edffa210 error 4 in libntirpc.so.5.8[7f8534ca2000+2c000] likely on CPU 0 (core 0, socket 0)
Dec  7 04:54:02 np0005549474 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  7 04:54:02 np0005549474 systemd[1]: Started Process Core Dump (PID 167396/UID 0).
Dec  7 04:54:03 np0005549474 python3.9[167527]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:03.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:03 np0005549474 systemd-coredump[167398]: Process 144991 (ganesha.nfsd) of user 0 dumped core.
                                                       
                                                       Stack trace of thread 62:
                                                       #0  0x00007f8534cbd32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Dec  7 04:54:03 np0005549474 systemd[1]: systemd-coredump@4-167396-0.service: Deactivated successfully.
Dec  7 04:54:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:54:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:54:03 np0005549474 podman[167684]: 2025-12-07 09:54:03.963187713 +0000 UTC m=+0.024682238 container died 9a8778d6ded8c07be14f9e3a22999930c8011d2a1b74258631aab150f24e2bcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Dec  7 04:54:03 np0005549474 python3.9[167680]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:04 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5ce87a1684893deca6ff5188c7643cd22449d23b2367189295027fbd88ba15f5-merged.mount: Deactivated successfully.
Dec  7 04:54:04 np0005549474 podman[167684]: 2025-12-07 09:54:04.142571463 +0000 UTC m=+0.204065968 container remove 9a8778d6ded8c07be14f9e3a22999930c8011d2a1b74258631aab150f24e2bcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:54:04 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Main process exited, code=exited, status=139/n/a
Dec  7 04:54:04 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Failed with result 'exit-code'.
Dec  7 04:54:04 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.447s CPU time.
Dec  7 04:54:04 np0005549474 python3.9[167877]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:54:05 np0005549474 python3.9[168030]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:05.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:54:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:05.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:54:06 np0005549474 python3.9[168183]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:54:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:07.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:54:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:07.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:54:07 np0005549474 podman[168308]: 2025-12-07 09:54:07.031432295 +0000 UTC m=+0.047718611 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  7 04:54:07 np0005549474 python3.9[168355]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:07.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:07 np0005549474 python3.9[168508]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:07.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:08 np0005549474 python3.9[168660]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:54:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095408 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:54:09 np0005549474 python3.9[168813]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:09.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:09 np0005549474 python3.9[168966]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:54:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:09.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:54:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:09] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 04:54:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:09] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 04:54:10 np0005549474 python3.9[169118]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:54:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec  7 04:54:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:11.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:54:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:11.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:54:12 np0005549474 python3.9[169272]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:54:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:54:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:54:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:54:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:54:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:54:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:54:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:54:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:54:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec  7 04:54:13 np0005549474 python3.9[169425]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  7 04:54:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:13.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:54:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:54:14 np0005549474 python3.9[169578]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  7 04:54:14 np0005549474 systemd[1]: Reloading.
Dec  7 04:54:14 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:54:14 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:54:14 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Scheduled restart job, restart counter is at 5.
Dec  7 04:54:14 np0005549474 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:54:14 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.447s CPU time.
Dec  7 04:54:14 np0005549474 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:54:14 np0005549474 podman[169681]: 2025-12-07 09:54:14.572889842 +0000 UTC m=+0.040767133 container create a3f5acc272b9fc46ba38448cfee0bbc3d7abcf3f2edbffed409c8953775061f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 04:54:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7473196165b89425f4b8174ab2ce3b43032a1bb8475357b94af29d2dc57d9c1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 04:54:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7473196165b89425f4b8174ab2ce3b43032a1bb8475357b94af29d2dc57d9c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:54:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7473196165b89425f4b8174ab2ce3b43032a1bb8475357b94af29d2dc57d9c1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:54:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7473196165b89425f4b8174ab2ce3b43032a1bb8475357b94af29d2dc57d9c1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bjrqrk-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:54:14 np0005549474 podman[169681]: 2025-12-07 09:54:14.635439293 +0000 UTC m=+0.103316604 container init a3f5acc272b9fc46ba38448cfee0bbc3d7abcf3f2edbffed409c8953775061f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 04:54:14 np0005549474 podman[169681]: 2025-12-07 09:54:14.639831872 +0000 UTC m=+0.107709163 container start a3f5acc272b9fc46ba38448cfee0bbc3d7abcf3f2edbffed409c8953775061f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 04:54:14 np0005549474 bash[169681]: a3f5acc272b9fc46ba38448cfee0bbc3d7abcf3f2edbffed409c8953775061f8
Dec  7 04:54:14 np0005549474 podman[169681]: 2025-12-07 09:54:14.554559166 +0000 UTC m=+0.022436487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:54:14 np0005549474 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:54:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:14 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 04:54:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:14 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 04:54:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:54:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:14 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 04:54:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:14 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 04:54:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:14 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 04:54:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:14 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 04:54:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:14 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 04:54:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:14 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:54:15 np0005549474 python3.9[169871]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:54:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:15.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:15.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:54:16 np0005549474 python3.9[170027]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:54:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:17.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:54:17 np0005549474 python3.9[170182]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:54:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:17.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:17.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:18 np0005549474 python3.9[170335]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:54:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:54:18 np0005549474 python3.9[170513]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:54:19 np0005549474 python3.9[170668]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:54:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:19.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:19.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:19] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:54:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:19] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:54:20 np0005549474 python3.9[170821]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:54:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 04:54:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:54:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:54:21 np0005549474 python3.9[170976]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  7 04:54:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:54:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:21.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:54:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:21.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:22 np0005549474 python3.9[171129]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  7 04:54:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Dec  7 04:54:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:23.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095423 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:54:23 np0005549474 python3.9[171289]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  7 04:54:23 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 04:54:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:54:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:23.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:54:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec  7 04:54:24 np0005549474 python3.9[171450]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:54:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:25.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:25 np0005549474 python3.9[171536]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:54:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:54:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:25.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:54:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000011:nfs.cephfs.2: -2
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  7 04:54:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:54:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:27.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:54:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:27.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:54:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:27 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f313c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:54:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:54:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:27.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:27 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31280016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:27.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  7 04:54:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:28 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:29 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:29.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:29 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:29] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 04:54:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:29] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 04:54:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  7 04:54:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:29.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  7 04:54:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec  7 04:54:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095430 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:54:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:30 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3128002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:31 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:31.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:31 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:31.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:32 np0005549474 podman[171570]: 2025-12-07 09:54:32.2974833 +0000 UTC m=+0.110934760 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  7 04:54:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  7 04:54:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:32 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:33 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:33 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:34.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  7 04:54:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:34 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:35 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:35.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:35 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:54:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:36 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:37.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:54:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:37.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:54:37 np0005549474 podman[171730]: 2025-12-07 09:54:37.265082409 +0000 UTC m=+0.074715887 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  7 04:54:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:37 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:37.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:37 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31340025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:38.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:54:38.604 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 04:54:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:54:38.604 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 04:54:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:54:38.604 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 04:54:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:54:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:38 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:39 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:39.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:39 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:39] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 04:54:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:39] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 04:54:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:40.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:54:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:40 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31340025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:41 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:41.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:41 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:42.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:54:42
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'images', '.mgr', '.nfs', 'volumes']
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:54:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:54:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:54:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:54:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:42 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:43 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31340032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  7 04:54:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:43.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  7 04:54:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:43 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31100032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:44.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:44 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Check health
Dec  7 04:54:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:54:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:44 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:45 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:45.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:45 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:46.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:54:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:46 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:47.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:54:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:47.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:54:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:47.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:54:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:47 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:47.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:47 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31100032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:48.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:54:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:48 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:49 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  7 04:54:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:49.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  7 04:54:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:49 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:49] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 04:54:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:49] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 04:54:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:50.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:54:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:50 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31100032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:51 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:51.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:51 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:52.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:54:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:52 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:53 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31100032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:53.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:53 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:54.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:54 np0005549474 kernel: SELinux:  Converting 2776 SID table entries...
Dec  7 04:54:54 np0005549474 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 04:54:54 np0005549474 kernel: SELinux:  policy capability open_perms=1
Dec  7 04:54:54 np0005549474 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 04:54:54 np0005549474 kernel: SELinux:  policy capability always_check_network=0
Dec  7 04:54:54 np0005549474 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 04:54:54 np0005549474 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 04:54:54 np0005549474 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 04:54:54 np0005549474 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec  7 04:54:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:54:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  7 04:54:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:54:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:54 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 04:54:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:55 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  7 04:54:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:55.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  7 04:54:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:55 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31100032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:54:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:56.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:54:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 04:54:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 04:54:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:56 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:57.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:54:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:57.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:54:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:54:57.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:57 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:54:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:54:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:57.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:54:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:57 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:54:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:54:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:58.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:58 np0005549474 podman[172047]: 2025-12-07 09:54:58.414555778 +0000 UTC m=+0.068458584 container create f9f7b53a43ac977e159478ef0622aade4129e965e814a2e954c8e38304444ef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_northcutt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 04:54:58 np0005549474 podman[172047]: 2025-12-07 09:54:58.377577505 +0000 UTC m=+0.031480331 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:54:58 np0005549474 systemd[1]: Started libpod-conmon-f9f7b53a43ac977e159478ef0622aade4129e965e814a2e954c8e38304444ef8.scope.
Dec  7 04:54:58 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:54:58 np0005549474 podman[172047]: 2025-12-07 09:54:58.683966963 +0000 UTC m=+0.337869819 container init f9f7b53a43ac977e159478ef0622aade4129e965e814a2e954c8e38304444ef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:54:58 np0005549474 podman[172047]: 2025-12-07 09:54:58.691851368 +0000 UTC m=+0.345754174 container start f9f7b53a43ac977e159478ef0622aade4129e965e814a2e954c8e38304444ef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_northcutt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 04:54:58 np0005549474 podman[172047]: 2025-12-07 09:54:58.695690168 +0000 UTC m=+0.349593024 container attach f9f7b53a43ac977e159478ef0622aade4129e965e814a2e954c8e38304444ef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:54:58 np0005549474 trusting_northcutt[172066]: 167 167
Dec  7 04:54:58 np0005549474 systemd[1]: libpod-f9f7b53a43ac977e159478ef0622aade4129e965e814a2e954c8e38304444ef8.scope: Deactivated successfully.
Dec  7 04:54:58 np0005549474 podman[172047]: 2025-12-07 09:54:58.69806077 +0000 UTC m=+0.351963576 container died f9f7b53a43ac977e159478ef0622aade4129e965e814a2e954c8e38304444ef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_northcutt, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:54:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:54:58 np0005549474 systemd[1]: var-lib-containers-storage-overlay-60f2c4e49bdb1d679bedb90645ae6224a5fde19169d181fb85c9971d5c584308-merged.mount: Deactivated successfully.
Dec  7 04:54:58 np0005549474 podman[172047]: 2025-12-07 09:54:58.767909158 +0000 UTC m=+0.421811964 container remove f9f7b53a43ac977e159478ef0622aade4129e965e814a2e954c8e38304444ef8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:54:58 np0005549474 systemd[1]: libpod-conmon-f9f7b53a43ac977e159478ef0622aade4129e965e814a2e954c8e38304444ef8.scope: Deactivated successfully.
Dec  7 04:54:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:58 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31100032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:58 np0005549474 podman[172092]: 2025-12-07 09:54:58.984371224 +0000 UTC m=+0.113893576 container create e66bfa6834ad63f76634fb62b97363843bd1f5def5174357721b8d9d34267201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Dec  7 04:54:58 np0005549474 podman[172092]: 2025-12-07 09:54:58.891734452 +0000 UTC m=+0.021256804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:54:58 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 04:54:58 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:54:58 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:58 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:54:58 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:54:59 np0005549474 systemd[1]: Started libpod-conmon-e66bfa6834ad63f76634fb62b97363843bd1f5def5174357721b8d9d34267201.scope.
Dec  7 04:54:59 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:54:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95fadb9d5bd24a30acab93da8d911ec3289b53e02168d9d67907aac590a4aff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:54:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95fadb9d5bd24a30acab93da8d911ec3289b53e02168d9d67907aac590a4aff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:54:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95fadb9d5bd24a30acab93da8d911ec3289b53e02168d9d67907aac590a4aff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:54:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95fadb9d5bd24a30acab93da8d911ec3289b53e02168d9d67907aac590a4aff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:54:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95fadb9d5bd24a30acab93da8d911ec3289b53e02168d9d67907aac590a4aff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:54:59 np0005549474 podman[172092]: 2025-12-07 09:54:59.078317671 +0000 UTC m=+0.207840043 container init e66bfa6834ad63f76634fb62b97363843bd1f5def5174357721b8d9d34267201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hermann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 04:54:59 np0005549474 podman[172092]: 2025-12-07 09:54:59.087978982 +0000 UTC m=+0.217501334 container start e66bfa6834ad63f76634fb62b97363843bd1f5def5174357721b8d9d34267201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hermann, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:54:59 np0005549474 podman[172092]: 2025-12-07 09:54:59.19001691 +0000 UTC m=+0.319539322 container attach e66bfa6834ad63f76634fb62b97363843bd1f5def5174357721b8d9d34267201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hermann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 04:54:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:59 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:59 np0005549474 zen_hermann[172110]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:54:59 np0005549474 zen_hermann[172110]: --> All data devices are unavailable
Dec  7 04:54:59 np0005549474 systemd[1]: libpod-e66bfa6834ad63f76634fb62b97363843bd1f5def5174357721b8d9d34267201.scope: Deactivated successfully.
Dec  7 04:54:59 np0005549474 podman[172092]: 2025-12-07 09:54:59.419410452 +0000 UTC m=+0.548932814 container died e66bfa6834ad63f76634fb62b97363843bd1f5def5174357721b8d9d34267201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hermann, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 04:54:59 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d95fadb9d5bd24a30acab93da8d911ec3289b53e02168d9d67907aac590a4aff-merged.mount: Deactivated successfully.
Dec  7 04:54:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:54:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:54:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:59.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:54:59 np0005549474 podman[172092]: 2025-12-07 09:54:59.724293721 +0000 UTC m=+0.853816073 container remove e66bfa6834ad63f76634fb62b97363843bd1f5def5174357721b8d9d34267201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 04:54:59 np0005549474 systemd[1]: libpod-conmon-e66bfa6834ad63f76634fb62b97363843bd1f5def5174357721b8d9d34267201.scope: Deactivated successfully.
Dec  7 04:54:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:54:59 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:54:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:59] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 04:54:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:54:59] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 04:55:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:00.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:00 np0005549474 podman[172232]: 2025-12-07 09:55:00.34524117 +0000 UTC m=+0.073771262 container create 1e99caf40060e73b64228a97f2eaaad4d777774fc78e4144c1b3c6a804dda85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lehmann, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:55:00 np0005549474 systemd[1]: Started libpod-conmon-1e99caf40060e73b64228a97f2eaaad4d777774fc78e4144c1b3c6a804dda85b.scope.
Dec  7 04:55:00 np0005549474 podman[172232]: 2025-12-07 09:55:00.294341524 +0000 UTC m=+0.022871616 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:55:00 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:55:00 np0005549474 podman[172232]: 2025-12-07 09:55:00.439828203 +0000 UTC m=+0.168358315 container init 1e99caf40060e73b64228a97f2eaaad4d777774fc78e4144c1b3c6a804dda85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lehmann, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 04:55:00 np0005549474 podman[172232]: 2025-12-07 09:55:00.447919964 +0000 UTC m=+0.176450096 container start 1e99caf40060e73b64228a97f2eaaad4d777774fc78e4144c1b3c6a804dda85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 04:55:00 np0005549474 podman[172232]: 2025-12-07 09:55:00.452216645 +0000 UTC m=+0.180746767 container attach 1e99caf40060e73b64228a97f2eaaad4d777774fc78e4144c1b3c6a804dda85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:55:00 np0005549474 amazing_lehmann[172249]: 167 167
Dec  7 04:55:00 np0005549474 systemd[1]: libpod-1e99caf40060e73b64228a97f2eaaad4d777774fc78e4144c1b3c6a804dda85b.scope: Deactivated successfully.
Dec  7 04:55:00 np0005549474 podman[172232]: 2025-12-07 09:55:00.454363971 +0000 UTC m=+0.182894063 container died 1e99caf40060e73b64228a97f2eaaad4d777774fc78e4144c1b3c6a804dda85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:55:00 np0005549474 systemd[1]: var-lib-containers-storage-overlay-70de3378e222e64d446d313ca6a6ca6ca674b5b70e2f4093ad2bab019ef81c2f-merged.mount: Deactivated successfully.
Dec  7 04:55:00 np0005549474 podman[172232]: 2025-12-07 09:55:00.588494744 +0000 UTC m=+0.317024836 container remove 1e99caf40060e73b64228a97f2eaaad4d777774fc78e4144c1b3c6a804dda85b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lehmann, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec  7 04:55:00 np0005549474 systemd[1]: libpod-conmon-1e99caf40060e73b64228a97f2eaaad4d777774fc78e4144c1b3c6a804dda85b.scope: Deactivated successfully.
Dec  7 04:55:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:55:00 np0005549474 podman[172275]: 2025-12-07 09:55:00.792165276 +0000 UTC m=+0.063515624 container create 79dc6c2bbb481a6cf1aceeb901a2c9ef90872b8c82b854c6f82cae4bc29cbe41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lovelace, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:55:00 np0005549474 systemd[1]: Started libpod-conmon-79dc6c2bbb481a6cf1aceeb901a2c9ef90872b8c82b854c6f82cae4bc29cbe41.scope.
Dec  7 04:55:00 np0005549474 podman[172275]: 2025-12-07 09:55:00.764910397 +0000 UTC m=+0.036260715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:55:00 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:55:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:00 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb49456dcae0ad42040f1713ee2084c5654a45fa0115775d2c4fd31a6a5e579/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:55:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb49456dcae0ad42040f1713ee2084c5654a45fa0115775d2c4fd31a6a5e579/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:55:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb49456dcae0ad42040f1713ee2084c5654a45fa0115775d2c4fd31a6a5e579/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:55:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb49456dcae0ad42040f1713ee2084c5654a45fa0115775d2c4fd31a6a5e579/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:55:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:01 np0005549474 podman[172275]: 2025-12-07 09:55:01.03154757 +0000 UTC m=+0.302897838 container init 79dc6c2bbb481a6cf1aceeb901a2c9ef90872b8c82b854c6f82cae4bc29cbe41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 04:55:01 np0005549474 podman[172275]: 2025-12-07 09:55:01.039426215 +0000 UTC m=+0.310776453 container start 79dc6c2bbb481a6cf1aceeb901a2c9ef90872b8c82b854c6f82cae4bc29cbe41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lovelace, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  7 04:55:01 np0005549474 podman[172275]: 2025-12-07 09:55:01.140063516 +0000 UTC m=+0.411413774 container attach 79dc6c2bbb481a6cf1aceeb901a2c9ef90872b8c82b854c6f82cae4bc29cbe41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]: {
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:    "0": [
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:        {
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            "devices": [
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "/dev/loop3"
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            ],
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            "lv_name": "ceph_lv0",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            "lv_size": "21470642176",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            "name": "ceph_lv0",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            "tags": {
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.cluster_name": "ceph",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.crush_device_class": "",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.encrypted": "0",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.osd_id": "0",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.type": "block",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.vdo": "0",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:                "ceph.with_tpm": "0"
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            },
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            "type": "block",
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:            "vg_name": "ceph_vg0"
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:        }
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]:    ]
Dec  7 04:55:01 np0005549474 sweet_lovelace[172293]: }
Dec  7 04:55:01 np0005549474 systemd[1]: libpod-79dc6c2bbb481a6cf1aceeb901a2c9ef90872b8c82b854c6f82cae4bc29cbe41.scope: Deactivated successfully.
Dec  7 04:55:01 np0005549474 podman[172275]: 2025-12-07 09:55:01.312077594 +0000 UTC m=+0.583427842 container died 79dc6c2bbb481a6cf1aceeb901a2c9ef90872b8c82b854c6f82cae4bc29cbe41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 04:55:01 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0fb49456dcae0ad42040f1713ee2084c5654a45fa0115775d2c4fd31a6a5e579-merged.mount: Deactivated successfully.
Dec  7 04:55:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:01 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31100032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:01 np0005549474 podman[172275]: 2025-12-07 09:55:01.439709308 +0000 UTC m=+0.711059546 container remove 79dc6c2bbb481a6cf1aceeb901a2c9ef90872b8c82b854c6f82cae4bc29cbe41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 04:55:01 np0005549474 systemd[1]: libpod-conmon-79dc6c2bbb481a6cf1aceeb901a2c9ef90872b8c82b854c6f82cae4bc29cbe41.scope: Deactivated successfully.
Dec  7 04:55:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:01.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:01 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:02.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:02 np0005549474 podman[172409]: 2025-12-07 09:55:02.120499245 +0000 UTC m=+0.078028703 container create 8062a56d3fc4c5239c90c81c3eb3d2f12f1a897ff6b5093c420350156216dbac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_black, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 04:55:02 np0005549474 systemd[1]: Started libpod-conmon-8062a56d3fc4c5239c90c81c3eb3d2f12f1a897ff6b5093c420350156216dbac.scope.
Dec  7 04:55:02 np0005549474 podman[172409]: 2025-12-07 09:55:02.066093328 +0000 UTC m=+0.023622806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:55:02 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:55:02 np0005549474 podman[172409]: 2025-12-07 09:55:02.226781832 +0000 UTC m=+0.184311300 container init 8062a56d3fc4c5239c90c81c3eb3d2f12f1a897ff6b5093c420350156216dbac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_black, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  7 04:55:02 np0005549474 podman[172409]: 2025-12-07 09:55:02.232316676 +0000 UTC m=+0.189846134 container start 8062a56d3fc4c5239c90c81c3eb3d2f12f1a897ff6b5093c420350156216dbac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 04:55:02 np0005549474 busy_black[172425]: 167 167
Dec  7 04:55:02 np0005549474 systemd[1]: libpod-8062a56d3fc4c5239c90c81c3eb3d2f12f1a897ff6b5093c420350156216dbac.scope: Deactivated successfully.
Dec  7 04:55:02 np0005549474 podman[172409]: 2025-12-07 09:55:02.238872897 +0000 UTC m=+0.196402355 container attach 8062a56d3fc4c5239c90c81c3eb3d2f12f1a897ff6b5093c420350156216dbac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 04:55:02 np0005549474 podman[172409]: 2025-12-07 09:55:02.239249106 +0000 UTC m=+0.196778574 container died 8062a56d3fc4c5239c90c81c3eb3d2f12f1a897ff6b5093c420350156216dbac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 04:55:02 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ac64f62b8af15ce1b144aff5f48f0a90293147e58d5338a9c24776e2b58bf038-merged.mount: Deactivated successfully.
Dec  7 04:55:02 np0005549474 podman[172409]: 2025-12-07 09:55:02.430348422 +0000 UTC m=+0.387877880 container remove 8062a56d3fc4c5239c90c81c3eb3d2f12f1a897ff6b5093c420350156216dbac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_black, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:55:02 np0005549474 systemd[1]: libpod-conmon-8062a56d3fc4c5239c90c81c3eb3d2f12f1a897ff6b5093c420350156216dbac.scope: Deactivated successfully.
Dec  7 04:55:02 np0005549474 podman[172443]: 2025-12-07 09:55:02.514157235 +0000 UTC m=+0.161558558 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  7 04:55:02 np0005549474 podman[172477]: 2025-12-07 09:55:02.608394579 +0000 UTC m=+0.051871792 container create e76f3aab5a3836000d7fe4fc480ccfbc00f95e0d139c8b6ddacae1a1fc06f8df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_tesla, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 04:55:02 np0005549474 systemd[1]: Started libpod-conmon-e76f3aab5a3836000d7fe4fc480ccfbc00f95e0d139c8b6ddacae1a1fc06f8df.scope.
Dec  7 04:55:02 np0005549474 podman[172477]: 2025-12-07 09:55:02.583765487 +0000 UTC m=+0.027242720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:55:02 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:55:02 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30344e485177e4e32fb0cef192f45fc0d2c5631ae04a0c383fd0a18faa2a871/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:55:02 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30344e485177e4e32fb0cef192f45fc0d2c5631ae04a0c383fd0a18faa2a871/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:55:02 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30344e485177e4e32fb0cef192f45fc0d2c5631ae04a0c383fd0a18faa2a871/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:55:02 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30344e485177e4e32fb0cef192f45fc0d2c5631ae04a0c383fd0a18faa2a871/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:55:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:02 np0005549474 podman[172477]: 2025-12-07 09:55:02.71101456 +0000 UTC m=+0.154491793 container init e76f3aab5a3836000d7fe4fc480ccfbc00f95e0d139c8b6ddacae1a1fc06f8df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_tesla, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:55:02 np0005549474 podman[172477]: 2025-12-07 09:55:02.723888006 +0000 UTC m=+0.167365259 container start e76f3aab5a3836000d7fe4fc480ccfbc00f95e0d139c8b6ddacae1a1fc06f8df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_tesla, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 04:55:02 np0005549474 podman[172477]: 2025-12-07 09:55:02.82237577 +0000 UTC m=+0.265853003 container attach e76f3aab5a3836000d7fe4fc480ccfbc00f95e0d139c8b6ddacae1a1fc06f8df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_tesla, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 04:55:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:02 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:03 np0005549474 lvm[172571]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:55:03 np0005549474 lvm[172571]: VG ceph_vg0 finished
Dec  7 04:55:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:03 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:03 np0005549474 adoring_tesla[172494]: {}
Dec  7 04:55:03 np0005549474 systemd[1]: libpod-e76f3aab5a3836000d7fe4fc480ccfbc00f95e0d139c8b6ddacae1a1fc06f8df.scope: Deactivated successfully.
Dec  7 04:55:03 np0005549474 systemd[1]: libpod-e76f3aab5a3836000d7fe4fc480ccfbc00f95e0d139c8b6ddacae1a1fc06f8df.scope: Consumed 1.100s CPU time.
Dec  7 04:55:03 np0005549474 podman[172477]: 2025-12-07 09:55:03.468843903 +0000 UTC m=+0.912321116 container died e76f3aab5a3836000d7fe4fc480ccfbc00f95e0d139c8b6ddacae1a1fc06f8df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 04:55:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:55:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:03.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:55:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:03 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31100032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:04.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:04 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:04 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a30344e485177e4e32fb0cef192f45fc0d2c5631ae04a0c383fd0a18faa2a871-merged.mount: Deactivated successfully.
Dec  7 04:55:05 np0005549474 podman[172477]: 2025-12-07 09:55:05.063915867 +0000 UTC m=+2.507393080 container remove e76f3aab5a3836000d7fe4fc480ccfbc00f95e0d139c8b6ddacae1a1fc06f8df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 04:55:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:55:05 np0005549474 systemd[1]: libpod-conmon-e76f3aab5a3836000d7fe4fc480ccfbc00f95e0d139c8b6ddacae1a1fc06f8df.scope: Deactivated successfully.
Dec  7 04:55:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:05 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:05 np0005549474 kernel: SELinux:  Converting 2776 SID table entries...
Dec  7 04:55:05 np0005549474 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 04:55:05 np0005549474 kernel: SELinux:  policy capability open_perms=1
Dec  7 04:55:05 np0005549474 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 04:55:05 np0005549474 kernel: SELinux:  policy capability always_check_network=0
Dec  7 04:55:05 np0005549474 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 04:55:05 np0005549474 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 04:55:05 np0005549474 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 04:55:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:05.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:05 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3128000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:55:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:55:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:06.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:55:06 np0005549474 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  7 04:55:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:06 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:55:07.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:55:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:55:07.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:55:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:55:07.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:55:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:07 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:07 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:55:07 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:55:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:07.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:07 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:08.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:08 np0005549474 podman[172623]: 2025-12-07 09:55:08.26929907 +0000 UTC m=+0.077446288 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:55:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:08 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31280018c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:09 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:55:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:09.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:55:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:09 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:09] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 04:55:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:09] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 04:55:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:10.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:55:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:10 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:11 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31280018c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  7 04:55:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:11.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  7 04:55:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=cleanup t=2025-12-07T09:55:11.740412643Z level=info msg="Completed cleanup jobs" duration=89.283065ms
Dec  7 04:55:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=plugins.update.checker t=2025-12-07T09:55:11.785833475Z level=info msg="Update check succeeded" duration=54.345576ms
Dec  7 04:55:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=grafana.update.checker t=2025-12-07T09:55:11.787469797Z level=info msg="Update check succeeded" duration=51.224854ms
Dec  7 04:55:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:11 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  7 04:55:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:12.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  7 04:55:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:55:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:55:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:55:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:55:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:55:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:55:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:55:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:55:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:12 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:13 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  7 04:55:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:13.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  7 04:55:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:13 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31280018c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:14.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:14 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:15 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:55:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:15.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:55:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:15 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:55:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:16.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:55:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:16 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31280029c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:55:17.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:55:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:17 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:55:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:17.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:55:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:17 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:18.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:18 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:19 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31280029c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:19.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:19 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:19] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:55:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:19] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:55:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  7 04:55:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:20.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  7 04:55:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:55:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:20 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:21 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:55:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:21.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:55:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:21 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f31280029c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:22.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:22 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:23 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:23.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:23 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:55:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:24.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:55:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:24 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3128003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:25 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:25.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:25 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:26.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:26 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:55:27.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:55:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:55:27.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:55:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:55:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:55:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:27 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3128003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:27.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:27 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:28.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:28 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:29 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:29.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:29 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3128003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:29] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 04:55:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:29] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 04:55:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:30.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:55:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:30 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:31 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:31.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:31 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:55:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:32.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:55:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:32 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:33 np0005549474 podman[182667]: 2025-12-07 09:55:33.262612772 +0000 UTC m=+0.078848059 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  7 04:55:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:33 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:55:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:33.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:55:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:33 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:34.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:34 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3128003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:35 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:55:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:35.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:55:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:35 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f310c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:36.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:36 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:55:37.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:55:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:37 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:37.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:37 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:38.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:55:38.605 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 04:55:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:55:38.605 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 04:55:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:55:38.605 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 04:55:38 np0005549474 podman[186262]: 2025-12-07 09:55:38.609274706 +0000 UTC m=+0.050313923 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 04:55:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:38 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:39 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:39.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:39 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:39] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 04:55:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:39] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 04:55:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:55:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:40.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:55:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:55:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:40 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:41 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:41.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:41 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:42.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:55:42
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['images', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms', 'backups', '.nfs', '.rgw.root']
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:55:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:55:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:55:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:42 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:43 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:43.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:43 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  7 04:55:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:44.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  7 04:55:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:44 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:45 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110002ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:45.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:46.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:46 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:46 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:55:47.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:55:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:55:47.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:55:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:47 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:47.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:47 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110002ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:48.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:48 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:49 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:49.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:49 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:49] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  7 04:55:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:49] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec  7 04:55:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:50.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:55:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:50 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110002ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:51 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118003100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:51.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:51 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:52.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:52 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:53 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110002ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:53.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:53 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118003100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:54.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:54 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:55 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:55.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:55 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110002ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:55:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:56.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:56 np0005549474 kernel: SELinux:  Converting 2777 SID table entries...
Dec  7 04:55:56 np0005549474 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 04:55:56 np0005549474 kernel: SELinux:  policy capability open_perms=1
Dec  7 04:55:56 np0005549474 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 04:55:56 np0005549474 kernel: SELinux:  policy capability always_check_network=0
Dec  7 04:55:56 np0005549474 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 04:55:56 np0005549474 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 04:55:56 np0005549474 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 04:55:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:56 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118003100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:55:57.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:55:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:55:57.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:55:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:55:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:55:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:57 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:57 np0005549474 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec  7 04:55:57 np0005549474 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  7 04:55:57 np0005549474 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec  7 04:55:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:57.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:57 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:55:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:58.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:55:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:55:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:58 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110004190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:59 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:55:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:55:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:59.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:55:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095559 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:55:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:55:59 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3134004900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:55:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:59] "GET /metrics HTTP/1.1" 200 48193 "" "Prometheus/2.51.0"
Dec  7 04:55:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:55:59] "GET /metrics HTTP/1.1" 200 48193 "" "Prometheus/2.51.0"
Dec  7 04:56:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:00.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:56:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:56:00 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f311c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:56:01 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3110004190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:01.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:56:01 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:02.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:56:02 np0005549474 kernel: ganesha.nfsd[184541]: segfault at 50 ip 00007f31e79c332e sp 00007f319b7fd210 error 4 in libntirpc.so.5.8[7f31e79a8000+2c000] likely on CPU 5 (core 0, socket 5)
Dec  7 04:56:02 np0005549474 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  7 04:56:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[169701]: 07/12/2025 09:56:02 : epoch 69354ec6 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3118004200 fd 38 proxy ignored for local
Dec  7 04:56:02 np0005549474 systemd[1]: Started Process Core Dump (PID 189883/UID 0).
Dec  7 04:56:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:03.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:04.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:04 np0005549474 podman[189886]: 2025-12-07 09:56:04.287671267 +0000 UTC m=+0.104391336 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Dec  7 04:56:04 np0005549474 systemd-coredump[189884]: Process 169705 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 58:#012#0  0x00007f31e79c332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  7 04:56:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:56:04 np0005549474 systemd[1]: systemd-coredump@5-189883-0.service: Deactivated successfully.
Dec  7 04:56:04 np0005549474 systemd[1]: systemd-coredump@5-189883-0.service: Consumed 1.166s CPU time.
Dec  7 04:56:04 np0005549474 podman[190048]: 2025-12-07 09:56:04.83371274 +0000 UTC m=+0.023922143 container died a3f5acc272b9fc46ba38448cfee0bbc3d7abcf3f2edbffed409c8953775061f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:56:04 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f7473196165b89425f4b8174ab2ce3b43032a1bb8475357b94af29d2dc57d9c1-merged.mount: Deactivated successfully.
Dec  7 04:56:04 np0005549474 podman[190048]: 2025-12-07 09:56:04.869056423 +0000 UTC m=+0.059265826 container remove a3f5acc272b9fc46ba38448cfee0bbc3d7abcf3f2edbffed409c8953775061f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 04:56:04 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Main process exited, code=exited, status=139/n/a
Dec  7 04:56:05 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Failed with result 'exit-code'.
Dec  7 04:56:05 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.525s CPU time.
Dec  7 04:56:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:05.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:05 np0005549474 systemd[1]: Stopping OpenSSH server daemon...
Dec  7 04:56:05 np0005549474 systemd[1]: sshd.service: Deactivated successfully.
Dec  7 04:56:05 np0005549474 systemd[1]: sshd.service: Unit process 189685 (sshd-session) remains running after unit stopped.
Dec  7 04:56:05 np0005549474 systemd[1]: Stopped OpenSSH server daemon.
Dec  7 04:56:05 np0005549474 systemd[1]: sshd.service: Consumed 5.933s CPU time, 39.5M memory peak, read 32.0K from disk, written 48.0K to disk.
Dec  7 04:56:05 np0005549474 systemd[1]: Stopped target sshd-keygen.target.
Dec  7 04:56:05 np0005549474 systemd[1]: Stopping sshd-keygen.target...
Dec  7 04:56:05 np0005549474 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  7 04:56:05 np0005549474 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  7 04:56:05 np0005549474 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  7 04:56:05 np0005549474 systemd[1]: Reached target sshd-keygen.target.
Dec  7 04:56:05 np0005549474 systemd[1]: Starting OpenSSH server daemon...
Dec  7 04:56:05 np0005549474 systemd[1]: Started OpenSSH server daemon.
Dec  7 04:56:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:56:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:06.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:56:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:56:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:56:07.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:56:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:56:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:56:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:56:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:56:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:56:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:07.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:08.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:56:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:56:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:56:08 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:56:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:56:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:56:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:56:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:56:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:56:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:56:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:56:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095608 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:56:08 np0005549474 podman[190927]: 2025-12-07 09:56:08.936049059 +0000 UTC m=+0.078983515 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  7 04:56:09 np0005549474 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 04:56:09 np0005549474 systemd[1]: Starting man-db-cache-update.service...
Dec  7 04:56:09 np0005549474 systemd[1]: Reloading.
Dec  7 04:56:09 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:56:09 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:56:09 np0005549474 podman[191109]: 2025-12-07 09:56:09.379671949 +0000 UTC m=+0.082013866 container create f7a3274ae3c5ae3f4f115cdbceca9bcc7995a0286510b3591e3649d07bc3bf70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 04:56:09 np0005549474 podman[191109]: 2025-12-07 09:56:09.320767704 +0000 UTC m=+0.023109641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:56:09 np0005549474 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 04:56:09 np0005549474 systemd[1]: Started libpod-conmon-f7a3274ae3c5ae3f4f115cdbceca9bcc7995a0286510b3591e3649d07bc3bf70.scope.
Dec  7 04:56:09 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:56:09 np0005549474 podman[191109]: 2025-12-07 09:56:09.585573771 +0000 UTC m=+0.287915708 container init f7a3274ae3c5ae3f4f115cdbceca9bcc7995a0286510b3591e3649d07bc3bf70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_nobel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 04:56:09 np0005549474 podman[191109]: 2025-12-07 09:56:09.592658674 +0000 UTC m=+0.295000601 container start f7a3274ae3c5ae3f4f115cdbceca9bcc7995a0286510b3591e3649d07bc3bf70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:56:09 np0005549474 condescending_nobel[191300]: 167 167
Dec  7 04:56:09 np0005549474 systemd[1]: libpod-f7a3274ae3c5ae3f4f115cdbceca9bcc7995a0286510b3591e3649d07bc3bf70.scope: Deactivated successfully.
Dec  7 04:56:09 np0005549474 auditd[706]: Audit daemon rotating log files
Dec  7 04:56:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:09.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:09] "GET /metrics HTTP/1.1" 200 48193 "" "Prometheus/2.51.0"
Dec  7 04:56:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:09] "GET /metrics HTTP/1.1" 200 48193 "" "Prometheus/2.51.0"
Dec  7 04:56:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:10.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:10 np0005549474 podman[191109]: 2025-12-07 09:56:10.228737431 +0000 UTC m=+0.931079378 container attach f7a3274ae3c5ae3f4f115cdbceca9bcc7995a0286510b3591e3649d07bc3bf70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  7 04:56:10 np0005549474 podman[191109]: 2025-12-07 09:56:10.230458508 +0000 UTC m=+0.932800425 container died f7a3274ae3c5ae3f4f115cdbceca9bcc7995a0286510b3591e3649d07bc3bf70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_nobel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 04:56:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:56:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:56:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:56:10 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6bee65f1701defa2fda382e4765c7b41dd1cec038931670418a5eaf2fe57fc2f-merged.mount: Deactivated successfully.
Dec  7 04:56:10 np0005549474 podman[191109]: 2025-12-07 09:56:10.278320562 +0000 UTC m=+0.980662479 container remove f7a3274ae3c5ae3f4f115cdbceca9bcc7995a0286510b3591e3649d07bc3bf70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:56:10 np0005549474 systemd[1]: libpod-conmon-f7a3274ae3c5ae3f4f115cdbceca9bcc7995a0286510b3591e3649d07bc3bf70.scope: Deactivated successfully.
Dec  7 04:56:10 np0005549474 podman[192237]: 2025-12-07 09:56:10.504059964 +0000 UTC m=+0.087223268 container create 4c06383fbe807db007ee511f5c6337e1b1a088fc9fc5e4e4e41735167b0f6376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 04:56:10 np0005549474 podman[192237]: 2025-12-07 09:56:10.442476376 +0000 UTC m=+0.025639730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:56:10 np0005549474 systemd[1]: Started libpod-conmon-4c06383fbe807db007ee511f5c6337e1b1a088fc9fc5e4e4e41735167b0f6376.scope.
Dec  7 04:56:10 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:56:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e7e2edd705d47477bf8019f03b132cb94108b7e396b12a63549eba7f4fe574/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e7e2edd705d47477bf8019f03b132cb94108b7e396b12a63549eba7f4fe574/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e7e2edd705d47477bf8019f03b132cb94108b7e396b12a63549eba7f4fe574/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e7e2edd705d47477bf8019f03b132cb94108b7e396b12a63549eba7f4fe574/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:10 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e7e2edd705d47477bf8019f03b132cb94108b7e396b12a63549eba7f4fe574/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:10 np0005549474 podman[192237]: 2025-12-07 09:56:10.581348701 +0000 UTC m=+0.164512025 container init 4c06383fbe807db007ee511f5c6337e1b1a088fc9fc5e4e4e41735167b0f6376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 04:56:10 np0005549474 podman[192237]: 2025-12-07 09:56:10.589463463 +0000 UTC m=+0.172626777 container start 4c06383fbe807db007ee511f5c6337e1b1a088fc9fc5e4e4e41735167b0f6376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:56:10 np0005549474 podman[192237]: 2025-12-07 09:56:10.594324644 +0000 UTC m=+0.177487968 container attach 4c06383fbe807db007ee511f5c6337e1b1a088fc9fc5e4e4e41735167b0f6376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:56:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec  7 04:56:10 np0005549474 great_blackburn[192370]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:56:10 np0005549474 great_blackburn[192370]: --> All data devices are unavailable
Dec  7 04:56:10 np0005549474 systemd[1]: libpod-4c06383fbe807db007ee511f5c6337e1b1a088fc9fc5e4e4e41735167b0f6376.scope: Deactivated successfully.
Dec  7 04:56:10 np0005549474 podman[192237]: 2025-12-07 09:56:10.902336839 +0000 UTC m=+0.485500153 container died 4c06383fbe807db007ee511f5c6337e1b1a088fc9fc5e4e4e41735167b0f6376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 04:56:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:11 np0005549474 systemd[1]: var-lib-containers-storage-overlay-14e7e2edd705d47477bf8019f03b132cb94108b7e396b12a63549eba7f4fe574-merged.mount: Deactivated successfully.
Dec  7 04:56:11 np0005549474 podman[192237]: 2025-12-07 09:56:11.198305226 +0000 UTC m=+0.781468520 container remove 4c06383fbe807db007ee511f5c6337e1b1a088fc9fc5e4e4e41735167b0f6376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  7 04:56:11 np0005549474 systemd[1]: libpod-conmon-4c06383fbe807db007ee511f5c6337e1b1a088fc9fc5e4e4e41735167b0f6376.scope: Deactivated successfully.
Dec  7 04:56:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:11.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:11 np0005549474 podman[193736]: 2025-12-07 09:56:11.703372701 +0000 UTC m=+0.023069059 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:56:11 np0005549474 podman[193736]: 2025-12-07 09:56:11.888646412 +0000 UTC m=+0.208342740 container create fb92498e2798d223fc38dcf22a32f9860dbef3a52d67b141d4798487f9f3bcad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heyrovsky, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:56:11 np0005549474 systemd[1]: Started libpod-conmon-fb92498e2798d223fc38dcf22a32f9860dbef3a52d67b141d4798487f9f3bcad.scope.
Dec  7 04:56:11 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:56:11 np0005549474 podman[193736]: 2025-12-07 09:56:11.978176661 +0000 UTC m=+0.297873039 container init fb92498e2798d223fc38dcf22a32f9860dbef3a52d67b141d4798487f9f3bcad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heyrovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 04:56:11 np0005549474 podman[193736]: 2025-12-07 09:56:11.983909768 +0000 UTC m=+0.303606096 container start fb92498e2798d223fc38dcf22a32f9860dbef3a52d67b141d4798487f9f3bcad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:56:11 np0005549474 angry_heyrovsky[194072]: 167 167
Dec  7 04:56:11 np0005549474 systemd[1]: libpod-fb92498e2798d223fc38dcf22a32f9860dbef3a52d67b141d4798487f9f3bcad.scope: Deactivated successfully.
Dec  7 04:56:11 np0005549474 conmon[194072]: conmon fb92498e2798d223fc38 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb92498e2798d223fc38dcf22a32f9860dbef3a52d67b141d4798487f9f3bcad.scope/container/memory.events
Dec  7 04:56:11 np0005549474 podman[193736]: 2025-12-07 09:56:11.994515526 +0000 UTC m=+0.314211884 container attach fb92498e2798d223fc38dcf22a32f9860dbef3a52d67b141d4798487f9f3bcad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 04:56:11 np0005549474 podman[193736]: 2025-12-07 09:56:11.995789962 +0000 UTC m=+0.315486310 container died fb92498e2798d223fc38dcf22a32f9860dbef3a52d67b141d4798487f9f3bcad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  7 04:56:12 np0005549474 systemd[1]: var-lib-containers-storage-overlay-50b372e98abe7505c160fbd9c4a3ac119e423eff1b382fa616bafa12e2dee120-merged.mount: Deactivated successfully.
Dec  7 04:56:12 np0005549474 podman[193736]: 2025-12-07 09:56:12.071148206 +0000 UTC m=+0.390844534 container remove fb92498e2798d223fc38dcf22a32f9860dbef3a52d67b141d4798487f9f3bcad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_heyrovsky, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:56:12 np0005549474 systemd[1]: libpod-conmon-fb92498e2798d223fc38dcf22a32f9860dbef3a52d67b141d4798487f9f3bcad.scope: Deactivated successfully.
Dec  7 04:56:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:12.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:12 np0005549474 podman[194414]: 2025-12-07 09:56:12.229156162 +0000 UTC m=+0.042269843 container create 4f2ef1f3a58c896e18a757def86b98b9bd1994b5cb55c6d259197830c898068c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jemison, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:56:12 np0005549474 systemd[1]: Started libpod-conmon-4f2ef1f3a58c896e18a757def86b98b9bd1994b5cb55c6d259197830c898068c.scope.
Dec  7 04:56:12 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:56:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57bb17e43569dfd70eb8849432927b05b874231456d49ceb8d6bb9e732b5bc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57bb17e43569dfd70eb8849432927b05b874231456d49ceb8d6bb9e732b5bc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57bb17e43569dfd70eb8849432927b05b874231456d49ceb8d6bb9e732b5bc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57bb17e43569dfd70eb8849432927b05b874231456d49ceb8d6bb9e732b5bc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:12 np0005549474 podman[194414]: 2025-12-07 09:56:12.207613514 +0000 UTC m=+0.020727195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:56:12 np0005549474 podman[194414]: 2025-12-07 09:56:12.332693314 +0000 UTC m=+0.145806985 container init 4f2ef1f3a58c896e18a757def86b98b9bd1994b5cb55c6d259197830c898068c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:56:12 np0005549474 podman[194414]: 2025-12-07 09:56:12.33918072 +0000 UTC m=+0.152294381 container start 4f2ef1f3a58c896e18a757def86b98b9bd1994b5cb55c6d259197830c898068c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jemison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 04:56:12 np0005549474 podman[194414]: 2025-12-07 09:56:12.344432003 +0000 UTC m=+0.157545674 container attach 4f2ef1f3a58c896e18a757def86b98b9bd1994b5cb55c6d259197830c898068c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jemison, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:56:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:56:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:56:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:56:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:56:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:56:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:56:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:56:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]: {
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:    "0": [
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:        {
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            "devices": [
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "/dev/loop3"
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            ],
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            "lv_name": "ceph_lv0",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            "lv_size": "21470642176",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            "name": "ceph_lv0",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            "tags": {
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.cluster_name": "ceph",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.crush_device_class": "",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.encrypted": "0",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.osd_id": "0",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.type": "block",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.vdo": "0",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:                "ceph.with_tpm": "0"
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            },
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            "type": "block",
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:            "vg_name": "ceph_vg0"
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:        }
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]:    ]
Dec  7 04:56:12 np0005549474 adoring_jemison[194554]: }
Dec  7 04:56:12 np0005549474 systemd[1]: libpod-4f2ef1f3a58c896e18a757def86b98b9bd1994b5cb55c6d259197830c898068c.scope: Deactivated successfully.
Dec  7 04:56:12 np0005549474 podman[194414]: 2025-12-07 09:56:12.616028696 +0000 UTC m=+0.429142357 container died 4f2ef1f3a58c896e18a757def86b98b9bd1994b5cb55c6d259197830c898068c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:56:12 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b57bb17e43569dfd70eb8849432927b05b874231456d49ceb8d6bb9e732b5bc7-merged.mount: Deactivated successfully.
Dec  7 04:56:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Dec  7 04:56:12 np0005549474 podman[194414]: 2025-12-07 09:56:12.73985344 +0000 UTC m=+0.552967091 container remove 4f2ef1f3a58c896e18a757def86b98b9bd1994b5cb55c6d259197830c898068c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jemison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:56:12 np0005549474 systemd[1]: libpod-conmon-4f2ef1f3a58c896e18a757def86b98b9bd1994b5cb55c6d259197830c898068c.scope: Deactivated successfully.
Dec  7 04:56:13 np0005549474 podman[195901]: 2025-12-07 09:56:13.237573506 +0000 UTC m=+0.039317493 container create 31cc24d5f24a9ffd57beadc4da6dff7804340e4eb5bf1a5c5cf159f27dffac26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:56:13 np0005549474 systemd[1]: Started libpod-conmon-31cc24d5f24a9ffd57beadc4da6dff7804340e4eb5bf1a5c5cf159f27dffac26.scope.
Dec  7 04:56:13 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:56:13 np0005549474 podman[195901]: 2025-12-07 09:56:13.315735217 +0000 UTC m=+0.117479224 container init 31cc24d5f24a9ffd57beadc4da6dff7804340e4eb5bf1a5c5cf159f27dffac26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 04:56:13 np0005549474 podman[195901]: 2025-12-07 09:56:13.22122798 +0000 UTC m=+0.022971987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:56:13 np0005549474 podman[195901]: 2025-12-07 09:56:13.324169896 +0000 UTC m=+0.125913873 container start 31cc24d5f24a9ffd57beadc4da6dff7804340e4eb5bf1a5c5cf159f27dffac26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_gates, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 04:56:13 np0005549474 podman[195901]: 2025-12-07 09:56:13.327032934 +0000 UTC m=+0.128776921 container attach 31cc24d5f24a9ffd57beadc4da6dff7804340e4eb5bf1a5c5cf159f27dffac26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_gates, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 04:56:13 np0005549474 nifty_gates[196021]: 167 167
Dec  7 04:56:13 np0005549474 systemd[1]: libpod-31cc24d5f24a9ffd57beadc4da6dff7804340e4eb5bf1a5c5cf159f27dffac26.scope: Deactivated successfully.
Dec  7 04:56:13 np0005549474 podman[195901]: 2025-12-07 09:56:13.3301835 +0000 UTC m=+0.131927487 container died 31cc24d5f24a9ffd57beadc4da6dff7804340e4eb5bf1a5c5cf159f27dffac26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 04:56:13 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ec286e1a205dcf89324c0415a039c1fb8667a319c0026370f6c979a2c25a23e2-merged.mount: Deactivated successfully.
Dec  7 04:56:13 np0005549474 podman[195901]: 2025-12-07 09:56:13.404036993 +0000 UTC m=+0.205780980 container remove 31cc24d5f24a9ffd57beadc4da6dff7804340e4eb5bf1a5c5cf159f27dffac26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_gates, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:56:13 np0005549474 systemd[1]: libpod-conmon-31cc24d5f24a9ffd57beadc4da6dff7804340e4eb5bf1a5c5cf159f27dffac26.scope: Deactivated successfully.
Dec  7 04:56:13 np0005549474 podman[196314]: 2025-12-07 09:56:13.55435692 +0000 UTC m=+0.040077543 container create d6a69d602078657a1e9048f7b55dd55679f126fdd7544b33fa77eea81e0122b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 04:56:13 np0005549474 systemd[1]: Started libpod-conmon-d6a69d602078657a1e9048f7b55dd55679f126fdd7544b33fa77eea81e0122b4.scope.
Dec  7 04:56:13 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:56:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85a662fd882713fef8404fd7e86cd0f34ec61a560b2499197622c311a9c9bda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85a662fd882713fef8404fd7e86cd0f34ec61a560b2499197622c311a9c9bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85a662fd882713fef8404fd7e86cd0f34ec61a560b2499197622c311a9c9bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85a662fd882713fef8404fd7e86cd0f34ec61a560b2499197622c311a9c9bda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:13 np0005549474 podman[196314]: 2025-12-07 09:56:13.627622237 +0000 UTC m=+0.113342890 container init d6a69d602078657a1e9048f7b55dd55679f126fdd7544b33fa77eea81e0122b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True)
Dec  7 04:56:13 np0005549474 podman[196314]: 2025-12-07 09:56:13.537046678 +0000 UTC m=+0.022767321 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:56:13 np0005549474 podman[196314]: 2025-12-07 09:56:13.634974387 +0000 UTC m=+0.120695010 container start d6a69d602078657a1e9048f7b55dd55679f126fdd7544b33fa77eea81e0122b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 04:56:13 np0005549474 podman[196314]: 2025-12-07 09:56:13.639069479 +0000 UTC m=+0.124790102 container attach d6a69d602078657a1e9048f7b55dd55679f126fdd7544b33fa77eea81e0122b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  7 04:56:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:13.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:14.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:14 np0005549474 lvm[197396]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:56:14 np0005549474 lvm[197396]: VG ceph_vg0 finished
Dec  7 04:56:14 np0005549474 epic_jemison[196470]: {}
Dec  7 04:56:14 np0005549474 systemd[1]: libpod-d6a69d602078657a1e9048f7b55dd55679f126fdd7544b33fa77eea81e0122b4.scope: Deactivated successfully.
Dec  7 04:56:14 np0005549474 podman[196314]: 2025-12-07 09:56:14.310992652 +0000 UTC m=+0.796713275 container died d6a69d602078657a1e9048f7b55dd55679f126fdd7544b33fa77eea81e0122b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 04:56:14 np0005549474 systemd[1]: libpod-d6a69d602078657a1e9048f7b55dd55679f126fdd7544b33fa77eea81e0122b4.scope: Consumed 1.001s CPU time.
Dec  7 04:56:14 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e85a662fd882713fef8404fd7e86cd0f34ec61a560b2499197622c311a9c9bda-merged.mount: Deactivated successfully.
Dec  7 04:56:14 np0005549474 podman[196314]: 2025-12-07 09:56:14.360353198 +0000 UTC m=+0.846073821 container remove d6a69d602078657a1e9048f7b55dd55679f126fdd7544b33fa77eea81e0122b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:56:14 np0005549474 systemd[1]: libpod-conmon-d6a69d602078657a1e9048f7b55dd55679f126fdd7544b33fa77eea81e0122b4.scope: Deactivated successfully.
Dec  7 04:56:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:56:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:56:15 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Scheduled restart job, restart counter is at 6.
Dec  7 04:56:15 np0005549474 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:56:15 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.525s CPU time.
Dec  7 04:56:15 np0005549474 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:56:15 np0005549474 podman[198699]: 2025-12-07 09:56:15.350918215 +0000 UTC m=+0.042349805 container create 4fe5129e4819b196cd3a0745e7b23a351c15dcf95585e57929f602b1fadb6b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:56:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad500ab7c37b37093bb23c6dbe26910d6e8403f74bd6e3b1691a23d9f8f84ca7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad500ab7c37b37093bb23c6dbe26910d6e8403f74bd6e3b1691a23d9f8f84ca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad500ab7c37b37093bb23c6dbe26910d6e8403f74bd6e3b1691a23d9f8f84ca7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad500ab7c37b37093bb23c6dbe26910d6e8403f74bd6e3b1691a23d9f8f84ca7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bjrqrk-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:56:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:56:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:56:15 np0005549474 podman[198699]: 2025-12-07 09:56:15.406667854 +0000 UTC m=+0.098099464 container init 4fe5129e4819b196cd3a0745e7b23a351c15dcf95585e57929f602b1fadb6b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:56:15 np0005549474 podman[198699]: 2025-12-07 09:56:15.416867223 +0000 UTC m=+0.108298813 container start 4fe5129e4819b196cd3a0745e7b23a351c15dcf95585e57929f602b1fadb6b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 04:56:15 np0005549474 bash[198699]: 4fe5129e4819b196cd3a0745e7b23a351c15dcf95585e57929f602b1fadb6b80
Dec  7 04:56:15 np0005549474 podman[198699]: 2025-12-07 09:56:15.329039999 +0000 UTC m=+0.020471609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:56:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 04:56:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 04:56:15 np0005549474 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:56:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 04:56:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 04:56:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 04:56:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 04:56:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 04:56:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:56:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:56:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:15.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:16.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:16 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:56:16 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:56:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:56:16 np0005549474 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 04:56:16 np0005549474 systemd[1]: Finished man-db-cache-update.service.
Dec  7 04:56:16 np0005549474 systemd[1]: man-db-cache-update.service: Consumed 9.375s CPU time.
Dec  7 04:56:16 np0005549474 systemd[1]: run-rf35ffd371d92444f823845650eb11a74.service: Deactivated successfully.
Dec  7 04:56:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:56:17.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:56:17 np0005549474 python3.9[200262]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  7 04:56:17 np0005549474 systemd[1]: Reloading.
Dec  7 04:56:17 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:56:17 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:56:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:17.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:18.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:18 np0005549474 python3.9[200454]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  7 04:56:18 np0005549474 systemd[1]: Reloading.
Dec  7 04:56:18 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:56:18 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:56:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 04:56:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:19.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:19 np0005549474 python3.9[200671]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  7 04:56:19 np0005549474 systemd[1]: Reloading.
Dec  7 04:56:19 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:56:19 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:56:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:19] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:56:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:19] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:56:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:20.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  7 04:56:20 np0005549474 python3.9[200860]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  7 04:56:20 np0005549474 systemd[1]: Reloading.
Dec  7 04:56:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:21 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:56:21 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:56:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec  7 04:56:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  7 04:56:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:56:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:56:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 04:56:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:56:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:56:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:56:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:21.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:22 np0005549474 python3.9[201051]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:22.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:22 np0005549474 systemd[1]: Reloading.
Dec  7 04:56:22 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:56:22 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:56:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 682 B/s wr, 2 op/s
Dec  7 04:56:23 np0005549474 python3.9[201242]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:23 np0005549474 systemd[1]: Reloading.
Dec  7 04:56:23 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:56:23 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:56:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:23.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095623 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:56:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:24.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:24 np0005549474 python3.9[201433]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:24 np0005549474 systemd[1]: Reloading.
Dec  7 04:56:24 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:56:24 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:56:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec  7 04:56:25 np0005549474 python3.9[201625]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:25.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:26.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:26 np0005549474 python3.9[201780]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:26 np0005549474 systemd[1]: Reloading.
Dec  7 04:56:26 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:56:26 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:56:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:56:27.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:56:27.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:56:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:56:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000013:nfs.cephfs.2: -2
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:56:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:27.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
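The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and .102, which repeat on a two-second cadence for the rest of this section, are external health probes against radosgw; RGW answers 200 with an empty body at sub-millisecond latency. An equivalent probe, as a sketch against a hypothetical local RGW endpoint (the log does not show the bound address or port):

    import http.client

    conn = http.client.HTTPConnection("localhost", 8080, timeout=2)  # address/port assumed
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 with an empty body, as in the beast lines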
Dec  7 04:56:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:28.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec  7 04:56:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:28 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:29 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:29 np0005549474 python3.9[201989]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  7 04:56:29 np0005549474 systemd[1]: Reloading.
Dec  7 04:56:29 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:56:29 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
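This is the first of a series of ansible-ansible.builtin.systemd tasks that enable and start the libvirt socket units one by one. Enabling a unit makes systemd reload its configuration ("Reloading."), which re-runs the generators, so the rc.local and SysV-network compatibility notices reappear after each task. Roughly what each invocation amounts to on the host, as a sketch (the real module talks over D-Bus and adds idempotence checks):

    import subprocess

    unit = "virtproxyd-tls.socket"
    subprocess.run(["systemctl", "enable", unit], check=True)  # enabled=True
    subprocess.run(["systemctl", "start", unit], check=True)   # state=started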
Dec  7 04:56:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:29.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:29 np0005549474 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  7 04:56:29 np0005549474 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  7 04:56:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:29 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:29] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:56:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:29] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:56:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:30.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  7 04:56:30 np0005549474 python3.9[202182]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095630 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
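This haproxy line explains the recurring ganesha TIRPC events: the ingress haproxy runs plain TCP ("Layer4") health checks against the NFS backend, while ganesha is configured to expect a PROXY protocol header on every inbound connection. A check that connects and closes without sending one fails the header read in svc_vc_recv, so the transport is marked dead and one EVENT is logged per probe; the stray "%" ending those messages looks like a broken format specifier in the ntirpc log call, so the offending length is never printed. A sketch of a well-formed PROXY v2 preamble, assuming a hypothetical backend port of 12049 (the log does not show it):

    import socket
    import struct

    SIG = b"\r\n\r\n\x00\r\nQUIT\n"            # 12-byte PROXY v2 signature
    header = SIG + bytes([0x21, 0x11])          # v2 PROXY command, TCP over IPv4
    addrs = socket.inet_aton("192.168.122.100") + socket.inet_aton("192.168.122.100")
    ports = struct.pack("!HH", 40000, 2049)     # client port, original dest port
    header += struct.pack("!H", len(addrs) + len(ports)) + addrs + ports

    s = socket.create_connection(("192.168.122.100", 12049), timeout=2)  # port assumed
    s.sendall(header)                           # ganesha can now parse RPC normally
    s.close()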
Dec  7 04:56:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:30 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:31 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:31 np0005549474 python3.9[202339]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:31.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:31 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:32.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:32 np0005549474 python3.9[202496]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Dec  7 04:56:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:32 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:33 np0005549474 python3.9[202652]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:33 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:33.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:33 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:34 np0005549474 python3.9[202808]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:34.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:34 np0005549474 podman[202935]: 2025-12-07 09:56:34.668450181 +0000 UTC m=+0.133621973 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
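podman emits one of these health_status events per healthcheck interval: the container's config_data mounts a host directory at /openstack and runs /openstack/healthcheck as the test, and health_failing_streak=0 means the recent runs all passed. The same state can be read back from the container, as a sketch (the field is .State.Health on recent podman releases, .State.Healthcheck on older ones):

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "ovn_controller"],
        capture_output=True, text=True, check=True).stdout
    health = json.loads(out)
    print(health["Status"], health["FailingStreak"])  # e.g. "healthy 0"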
Dec  7 04:56:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Dec  7 04:56:34 np0005549474 python3.9[202982]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:34 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:35 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:35 np0005549474 python3.9[203146]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:35.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:35 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:36.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:36 np0005549474 python3.9[203301]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec  7 04:56:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:36 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:56:37.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
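Alertmanager is fanning an alert out to the ceph-dashboard webhook receivers on compute-1 and compute-2 and hitting its context deadline on both legs, so the dispatcher logs this failure roughly every ten seconds in this section; only the local receiver is reachable. What such a receiver minimally looks like, as a hypothetical stand-in (the real endpoint is the ceph dashboard, not this script):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path == "/api/prometheus_receiver":    # path from the log
                length = int(self.headers.get("Content-Length", 0))
                self.rfile.read(length)  # alertmanager posts JSON with an "alerts" list
                self.send_response(200)
            else:
                self.send_response(404)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()  # port from the log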
Dec  7 04:56:37 np0005549474 python3.9[203457]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:37 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c80091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:37.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:37 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:38 np0005549474 python3.9[203613]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:38.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:56:38.607 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 04:56:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:56:38.608 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 04:56:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:56:38.608 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
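The acquire/acquired/released triple is the standard oslo.concurrency pattern: ProcessMonitor wraps its child-process sweep in a named lock, and lockutils logs all three transitions at DEBUG, including wait and hold times. The shape of the code that produces it, as a sketch assuming oslo.concurrency is installed and DEBUG logging is configured:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # sweep the monitored external processes here

    check_child_processes()  # logs Acquiring / acquired / released at DEBUG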
Dec  7 04:56:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec  7 04:56:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:38 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:38 np0005549474 python3.9[203768]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:39 np0005549474 podman[203771]: 2025-12-07 09:56:39.057876734 +0000 UTC m=+0.052944704 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  7 04:56:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:39 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:39.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:39 np0005549474 python3.9[203968]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:39 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c80091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:39] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:56:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:39] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:56:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:40.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:40 np0005549474 python3.9[204123]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Dec  7 04:56:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:40 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:41 np0005549474 python3.9[204279]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  7 04:56:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:41 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:41.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:41 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:42.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:56:42
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', '.nfs', 'volumes', 'cephfs.cephfs.data', 'images']
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:56:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:56:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
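These two mon lines are the receiving side of the mgr's periodic blocklist poll: the mgr (entity mgr.compute-0.dotugk) sends a mon_command carrying a JSON prefix, and the monitor logs both the dispatch and an audit entry. The same query can be issued from the rados Python binding, as a sketch assuming python3-rados plus a readable ceph.conf and keyring:

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
        ret, out, err = cluster.mon_command(cmd, b"")
        print(ret, json.loads(out or b"[]"))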
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
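The pg_autoscaler numbers are reproducible from the logged inputs: each pool's pg target is its share of raw space times its bias times the cluster-wide PG budget (OSD count x mon_target_pg_per_osd). Assuming 3 OSDs and the default budget of 100 PGs per OSD (both inferred, not logged), '.mgr' gives 7.185749983720779e-06 x 1.0 x 300 = 0.0021557..., exactly the value logged, and 'cephfs.cephfs.meta' with its 4.0 bias gives 0.00061047...; every target is far below the current PG count, so nothing is resized.

    # Reproducing the pg_autoscaler targets above (assumes 3 OSDs and the
    # default mon_target_pg_per_osd=100; both values are inferred, not logged).
    def pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        return usage_ratio * bias * osds * target_pg_per_osd

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.00215572... -> quantized to 1 ('.mgr')
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.00061047... ('cephfs.cephfs.meta')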
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:56:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:56:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:42 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:43 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:43.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:43 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:44.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:44 np0005549474 python3.9[204437]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
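The file tasks in this stretch all pass setype=container_file_t so the directories stay readable from confined container processes. Per directory, the effect is roughly the following, as a sketch (ansible sets the context through the SELinux Python bindings rather than by shelling out):

    import subprocess

    path = "/etc/tmpfiles.d/"
    subprocess.run(["chcon", "-t", "container_file_t", path], check=True)
    print(subprocess.run(["ls", "-Zd", path], capture_output=True, text=True).stdout)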
Dec  7 04:56:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:56:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:44 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:45 np0005549474 python3.9[204590]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:56:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:45 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:45.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:45 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:46 np0005549474 python3.9[204743]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:56:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:46.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:56:46 np0005549474 python3.9[204895]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:56:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:46 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:56:47.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:56:47 np0005549474 python3.9[205049]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:56:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:47 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:47.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:47 np0005549474 python3.9[205201]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:56:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:47 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:48.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:56:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:48 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:49 np0005549474 python3.9[205355]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:56:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:49 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:49.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:49 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:49] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  7 04:56:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:49] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  7 04:56:50 np0005549474 python3.9[205480]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101408.7558036-1622-46970132880692/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
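Each libvirt config file lands through the same two-step pattern visible here: ansible-ansible.legacy.stat checksums the destination first, then ansible-ansible.legacy.copy ships the templated file only when the sha1 differs, applying owner/group/mode afterwards. The idempotence logic, reduced to a sketch with illustrative paths:

    import hashlib
    import pathlib
    import shutil

    def copy_if_changed(src, dest):
        sha1 = lambda p: hashlib.sha1(pathlib.Path(p).read_bytes()).hexdigest()
        if not pathlib.Path(dest).exists() or sha1(src) != sha1(dest):
            shutil.copy2(src, dest)  # the module also sets owner/group/mode here
            return True              # task reports "changed"
        return False                 # task reports "ok"

    copy_if_changed("virtlogd.conf", "/etc/libvirt/virtlogd.conf")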
Dec  7 04:56:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:50.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:56:50 np0005549474 python3.9[205632]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:56:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:50 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:51 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:51 np0005549474 python3.9[205759]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101410.2706394-1622-24389770936417/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:56:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:51.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:51 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:52 np0005549474 python3.9[205911]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:56:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:52.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:52 np0005549474 python3.9[206036]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101411.7212312-1622-53926831743581/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:56:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:56:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:52 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:53 np0005549474 python3.9[206190]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:56:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:53 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:53.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:53 np0005549474 python3.9[206315]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101412.861108-1622-112584419583539/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:56:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:53 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:54.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:54 np0005549474 python3.9[206467]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:56:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:56:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:54 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:55 np0005549474 python3.9[206593]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101414.095572-1622-55450488994537/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:56:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:55 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:55.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:55 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:56:56 np0005549474 python3.9[206746]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:56:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:56.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:56 np0005549474 python3.9[206871]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101415.3321033-1622-27819921733315/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:56:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:56:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:56 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:56:57.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:56:57 np0005549474 python3.9[207025]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:56:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:56:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:56:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:57 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:56:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:57.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:56:57 np0005549474 python3.9[207148]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101416.8927007-1622-61436918952187/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:56:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:57 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 04:56:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:58.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  7 04:56:58 np0005549474 python3.9[207301]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:56:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:56:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:58 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:59 np0005549474 python3.9[207427]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101418.0745106-1622-278847056445684/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:56:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:59 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:56:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:56:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:59.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:56:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:56:59 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:56:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:59] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:56:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:56:59] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
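
Each Prometheus scrape of the mgr's prometheus module is logged twice, once by the container wrapper and once by cherrypy inside ceph-mgr; the hits land every 10 s (09:56:59, 09:57:09, ...) and return about 48 KiB of exposition text. Fetching the same endpoint; port 9283 is the module default and an assumption here:

    # Pull the metrics page Prometheus is scraping above.
    from urllib.request import urlopen

    with urlopen("http://192.168.122.100:9283/metrics", timeout=5) as r:
        text = r.read().decode()
    print(text.splitlines()[:5])   # first few metric lines
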
Dec  7 04:57:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:00.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:57:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:00 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:01 np0005549474 python3.9[207609]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
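
This task seeds libvirt's SASL database with a "migration" user in the "openstack" realm for live-migration authentication; -p takes the password on stdin (the CI value is visible because the module logs its stdin parameter). The same invocation from Python:

    # Mirror of the logged saslpasswd2 call; run as root on the host.
    import subprocess

    subprocess.run(
        ["saslpasswd2", "-f", "/etc/libvirt/passwd.db",
         "-p", "-a", "libvirt", "-u", "openstack", "migration"],
        input="12345678\n", text=True, check=True,
    )
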
Dec  7 04:57:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:01 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5690000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:01.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:01 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:57:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:02.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:57:02 np0005549474 python3.9[207762]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:57:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:02 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:02 np0005549474 python3.9[207914]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:03 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:03 np0005549474 python3.9[208068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:03.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:03 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:04 np0005549474 python3.9[208220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:04.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:57:04 np0005549474 podman[208344]: 2025-12-07 09:57:04.892691877 +0000 UTC m=+0.127932128 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
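
The podman health_status events are periodic container health checks; the embedded config_data shows each container mounts /var/lib/openstack/healthchecks/<name> and runs /openstack/healthcheck, with exit status 0 reported as health_status=healthy. The check can also be driven by hand:

    # Run the container's configured health check once, as podman's
    # timer does; exit status 0 corresponds to "healthy" in the log.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ovn_controller"]).returncode
    print("healthy" if rc == 0 else "unhealthy (streak will increment)")
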
Dec  7 04:57:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:04 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:04 np0005549474 python3.9[208390]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:05 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:05 np0005549474 python3.9[208552]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:05.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:05 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:06.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:06 np0005549474 python3.9[208704]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:57:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:06 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:57:07.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:57:07 np0005549474 python3.9[208857]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:07 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:07.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:07 np0005549474 python3.9[209010]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:07 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:08.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:08 np0005549474 python3.9[209162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:57:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:08 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:08 np0005549474 python3.9[209315]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:09 np0005549474 podman[209325]: 2025-12-07 09:57:09.263313598 +0000 UTC m=+0.072715943 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  7 04:57:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:09 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:09 np0005549474 python3.9[209488]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:09.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:09 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:09] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:57:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:09] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:57:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:10.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:10 np0005549474 python3.9[209640]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 04:57:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:10 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:10 np0005549474 python3.9[209792]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:11 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:11.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:11 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:12.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:57:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:57:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:57:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:57:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:57:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:57:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:57:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:57:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:57:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:12 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:13 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:13 np0005549474 python3.9[209950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:13.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:13 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:14.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:14 np0005549474 python3.9[210073]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101433.1134965-2285-15885621933026/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
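
Having created the socket.d directories, the play now drops the same rendered libvirt-socket.unit.j2 into each of them (the copies all carry checksum 0bad41f4...). The rendered content itself is not logged; the [Socket] body below is purely an assumption to show the drop-in mechanism:

    # Sketch: install a systemd drop-in like the override.conf above.
    # The SocketMode directive is hypothetical, not the template's output.
    from pathlib import Path

    dropin = Path("/etc/systemd/system/virtlogd.socket.d/override.conf")
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text("[Socket]\nSocketMode=0666\n")
    # a systemctl daemon-reload must follow before the change applies
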
Dec  7 04:57:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:57:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:14 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5690002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:15 np0005549474 python3.9[210226]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:15 np0005549474 python3.9[210350]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101434.6489384-2285-206781321365003/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:15.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:16.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:16 np0005549474 python3.9[210563]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:16 np0005549474 podman[210628]: 2025-12-07 09:57:16.495965244 +0000 UTC m=+0.064895630 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 04:57:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095716 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
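
haproxy's Layer4 check to the nfs.cephfs.1 backend was refused, so that ganesha instance (most plausibly the one on another compute node) is pulled from rotation, leaving two active servers. The check itself is nothing more than a timed TCP handshake:

    # A Layer4 check of the kind haproxy reports above; host and port
    # are assumptions standing in for the nfs.cephfs.1 backend.
    import socket

    def l4_up(host: str, port: int) -> bool:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:        # "Connection refused" lands here
            return False

    print(l4_up("192.168.122.101", 2049))
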
Dec  7 04:57:16 np0005549474 podman[210628]: 2025-12-07 09:57:16.667416746 +0000 UTC m=+0.236347122 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 04:57:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:57:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:16 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:16 np0005549474 python3.9[210804]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101435.8925123-2285-157373462141988/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:57:17.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:57:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:57:17.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:57:17 np0005549474 podman[210895]: 2025-12-07 09:57:17.154753039 +0000 UTC m=+0.053604212 container exec 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:57:17 np0005549474 podman[210895]: 2025-12-07 09:57:17.189588929 +0000 UTC m=+0.088440072 container exec_died 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:57:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:17 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5690003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:17 np0005549474 podman[211100]: 2025-12-07 09:57:17.563371666 +0000 UTC m=+0.091685361 container exec 4fe5129e4819b196cd3a0745e7b23a351c15dcf95585e57929f602b1fadb6b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:57:17 np0005549474 podman[211100]: 2025-12-07 09:57:17.577573763 +0000 UTC m=+0.105887358 container exec_died 4fe5129e4819b196cd3a0745e7b23a351c15dcf95585e57929f602b1fadb6b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:57:17 np0005549474 python3.9[211129]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:17.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:17 np0005549474 podman[211184]: 2025-12-07 09:57:17.81892177 +0000 UTC m=+0.053245392 container exec e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:57:17 np0005549474 podman[211184]: 2025-12-07 09:57:17.832662635 +0000 UTC m=+0.066986247 container exec_died e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 04:57:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:17 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:18 np0005549474 podman[211319]: 2025-12-07 09:57:18.043049969 +0000 UTC m=+0.049708185 container exec 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., release=1793, vcs-type=git, name=keepalived, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec  7 04:57:18 np0005549474 podman[211319]: 2025-12-07 09:57:18.064579896 +0000 UTC m=+0.071238112 container exec_died 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, name=keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=)
Dec  7 04:57:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:18.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:18 np0005549474 podman[211436]: 2025-12-07 09:57:18.285280221 +0000 UTC m=+0.059974246 container exec d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:57:18 np0005549474 podman[211436]: 2025-12-07 09:57:18.323605596 +0000 UTC m=+0.098299621 container exec_died d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:57:18 np0005549474 python3.9[211407]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101437.1752808-2285-152762311385694/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:18 np0005549474 podman[211541]: 2025-12-07 09:57:18.557302385 +0000 UTC m=+0.057549180 container exec d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:57:18 np0005549474 podman[211541]: 2025-12-07 09:57:18.734247748 +0000 UTC m=+0.234494543 container exec_died d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 04:57:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:57:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:18 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:18 np0005549474 python3.9[211720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:19 np0005549474 podman[211777]: 2025-12-07 09:57:19.08143728 +0000 UTC m=+0.048990605 container exec 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:57:19 np0005549474 podman[211777]: 2025-12-07 09:57:19.10856151 +0000 UTC m=+0.076114815 container exec_died 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:19 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:19 np0005549474 python3.9[212017]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101438.523251-2285-101401538140607/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:19.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:57:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
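
As part of its host refresh, cephadm asks the monitor for a minimal client configuration and for the admin and bootstrap-osd credentials it needs to place daemons; each request is audited above. The same pair of calls from the CLI:

    # The two mon commands dispatched above, issued directly.
    import subprocess

    def run(*args):
        return subprocess.run(["ceph", *args], capture_output=True,
                              text=True, check=True).stdout

    print(run("config", "generate-minimal-conf"))
    print(run("auth", "get", "client.bootstrap-osd"))
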
Dec  7 04:57:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:19 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5690003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:19] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:57:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:19] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:57:20 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:20 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:20 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:57:20 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:20 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:20 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:57:20 np0005549474 python3.9[212250]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:20.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:20 np0005549474 podman[212303]: 2025-12-07 09:57:20.337326389 +0000 UTC m=+0.037169964 container create a4fcf8c11df10ef8cad7acfb4e07ae0df49d50782e953565a86f109a32d152bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:57:20 np0005549474 systemd[1]: Started libpod-conmon-a4fcf8c11df10ef8cad7acfb4e07ae0df49d50782e953565a86f109a32d152bc.scope.
Dec  7 04:57:20 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:57:20 np0005549474 podman[212303]: 2025-12-07 09:57:20.394739714 +0000 UTC m=+0.094583289 container init a4fcf8c11df10ef8cad7acfb4e07ae0df49d50782e953565a86f109a32d152bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_feistel, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 04:57:20 np0005549474 podman[212303]: 2025-12-07 09:57:20.402036413 +0000 UTC m=+0.101879988 container start a4fcf8c11df10ef8cad7acfb4e07ae0df49d50782e953565a86f109a32d152bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  7 04:57:20 np0005549474 podman[212303]: 2025-12-07 09:57:20.405078736 +0000 UTC m=+0.104922321 container attach a4fcf8c11df10ef8cad7acfb4e07ae0df49d50782e953565a86f109a32d152bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_feistel, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 04:57:20 np0005549474 nifty_feistel[212354]: 167 167
Dec  7 04:57:20 np0005549474 systemd[1]: libpod-a4fcf8c11df10ef8cad7acfb4e07ae0df49d50782e953565a86f109a32d152bc.scope: Deactivated successfully.
Dec  7 04:57:20 np0005549474 podman[212303]: 2025-12-07 09:57:20.406476034 +0000 UTC m=+0.106319609 container died a4fcf8c11df10ef8cad7acfb4e07ae0df49d50782e953565a86f109a32d152bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_feistel, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:57:20 np0005549474 podman[212303]: 2025-12-07 09:57:20.322164766 +0000 UTC m=+0.022008371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:57:20 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2740a084f4a9901428a814718f8872acffb1e3910deb6fc1b2bd67f4bfd80c0d-merged.mount: Deactivated successfully.
Dec  7 04:57:20 np0005549474 podman[212303]: 2025-12-07 09:57:20.441792637 +0000 UTC m=+0.141636212 container remove a4fcf8c11df10ef8cad7acfb4e07ae0df49d50782e953565a86f109a32d152bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_feistel, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:57:20 np0005549474 systemd[1]: libpod-conmon-a4fcf8c11df10ef8cad7acfb4e07ae0df49d50782e953565a86f109a32d152bc.scope: Deactivated successfully.
Dec  7 04:57:20 np0005549474 podman[212453]: 2025-12-07 09:57:20.591880907 +0000 UTC m=+0.038992884 container create 92cd6eb2c63bad1de8b972fe442d830699419add6f83a7d7b42f2293fbf80ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_clarke, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 04:57:20 np0005549474 systemd[1]: Started libpod-conmon-92cd6eb2c63bad1de8b972fe442d830699419add6f83a7d7b42f2293fbf80ab0.scope.
Dec  7 04:57:20 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:57:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1689a4d43699b9663cb86306c5a3a78456460b10512b0106e2a44b026b53afba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1689a4d43699b9663cb86306c5a3a78456460b10512b0106e2a44b026b53afba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1689a4d43699b9663cb86306c5a3a78456460b10512b0106e2a44b026b53afba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1689a4d43699b9663cb86306c5a3a78456460b10512b0106e2a44b026b53afba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1689a4d43699b9663cb86306c5a3a78456460b10512b0106e2a44b026b53afba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:20 np0005549474 podman[212453]: 2025-12-07 09:57:20.57399425 +0000 UTC m=+0.021106257 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:57:20 np0005549474 podman[212453]: 2025-12-07 09:57:20.673965485 +0000 UTC m=+0.121077482 container init 92cd6eb2c63bad1de8b972fe442d830699419add6f83a7d7b42f2293fbf80ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_clarke, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:57:20 np0005549474 podman[212453]: 2025-12-07 09:57:20.68371721 +0000 UTC m=+0.130829187 container start 92cd6eb2c63bad1de8b972fe442d830699419add6f83a7d7b42f2293fbf80ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:57:20 np0005549474 podman[212453]: 2025-12-07 09:57:20.687450092 +0000 UTC m=+0.134562069 container attach 92cd6eb2c63bad1de8b972fe442d830699419add6f83a7d7b42f2293fbf80ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_clarke, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 04:57:20 np0005549474 python3.9[212452]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101439.6933737-2285-94534700983747/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:57:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:20 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:21 np0005549474 elated_clarke[212469]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:57:21 np0005549474 elated_clarke[212469]: --> All data devices are unavailable
Dec  7 04:57:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:21 np0005549474 systemd[1]: libpod-92cd6eb2c63bad1de8b972fe442d830699419add6f83a7d7b42f2293fbf80ab0.scope: Deactivated successfully.
Dec  7 04:57:21 np0005549474 podman[212453]: 2025-12-07 09:57:21.025611159 +0000 UTC m=+0.472723146 container died 92cd6eb2c63bad1de8b972fe442d830699419add6f83a7d7b42f2293fbf80ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_clarke, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:57:21 np0005549474 systemd[1]: var-lib-containers-storage-overlay-1689a4d43699b9663cb86306c5a3a78456460b10512b0106e2a44b026b53afba-merged.mount: Deactivated successfully.
Dec  7 04:57:21 np0005549474 podman[212453]: 2025-12-07 09:57:21.061944529 +0000 UTC m=+0.509056506 container remove 92cd6eb2c63bad1de8b972fe442d830699419add6f83a7d7b42f2293fbf80ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_clarke, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:57:21 np0005549474 systemd[1]: libpod-conmon-92cd6eb2c63bad1de8b972fe442d830699419add6f83a7d7b42f2293fbf80ab0.scope: Deactivated successfully.
Dec  7 04:57:21 np0005549474 python3.9[212668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:21 np0005549474 podman[212809]: 2025-12-07 09:57:21.596225561 +0000 UTC m=+0.043142498 container create b1f7a431d06d9714dd50244c357c40ebd1a6c8dadeae77b70af87618099aa679 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_elion, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:57:21 np0005549474 systemd[1]: Started libpod-conmon-b1f7a431d06d9714dd50244c357c40ebd1a6c8dadeae77b70af87618099aa679.scope.
Dec  7 04:57:21 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:57:21 np0005549474 podman[212809]: 2025-12-07 09:57:21.575262199 +0000 UTC m=+0.022179126 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:57:21 np0005549474 podman[212809]: 2025-12-07 09:57:21.675596314 +0000 UTC m=+0.122513241 container init b1f7a431d06d9714dd50244c357c40ebd1a6c8dadeae77b70af87618099aa679 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_elion, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:57:21 np0005549474 podman[212809]: 2025-12-07 09:57:21.686797859 +0000 UTC m=+0.133714786 container start b1f7a431d06d9714dd50244c357c40ebd1a6c8dadeae77b70af87618099aa679 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_elion, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  7 04:57:21 np0005549474 nice_elion[212860]: 167 167
Dec  7 04:57:21 np0005549474 podman[212809]: 2025-12-07 09:57:21.691290132 +0000 UTC m=+0.138207069 container attach b1f7a431d06d9714dd50244c357c40ebd1a6c8dadeae77b70af87618099aa679 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:57:21 np0005549474 systemd[1]: libpod-b1f7a431d06d9714dd50244c357c40ebd1a6c8dadeae77b70af87618099aa679.scope: Deactivated successfully.
Dec  7 04:57:21 np0005549474 podman[212809]: 2025-12-07 09:57:21.692037952 +0000 UTC m=+0.138954869 container died b1f7a431d06d9714dd50244c357c40ebd1a6c8dadeae77b70af87618099aa679 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Dec  7 04:57:21 np0005549474 systemd[1]: var-lib-containers-storage-overlay-64bc3c148930e546a86757396fdb9ac124598191aa5e3afb8c127b5cc0672e96-merged.mount: Deactivated successfully.
Dec  7 04:57:21 np0005549474 podman[212809]: 2025-12-07 09:57:21.728546207 +0000 UTC m=+0.175463104 container remove b1f7a431d06d9714dd50244c357c40ebd1a6c8dadeae77b70af87618099aa679 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 04:57:21 np0005549474 systemd[1]: libpod-conmon-b1f7a431d06d9714dd50244c357c40ebd1a6c8dadeae77b70af87618099aa679.scope: Deactivated successfully.
Dec  7 04:57:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:21.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:21 np0005549474 python3.9[212882]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101440.8707893-2285-135287705062841/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:21 np0005549474 podman[212903]: 2025-12-07 09:57:21.943556097 +0000 UTC m=+0.052971424 container create 737dbaf008400c49ae8bd4b37b586f344a58e45e0d2986a18d80b1b2db037a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shamir, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 04:57:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:21 np0005549474 systemd[1]: Started libpod-conmon-737dbaf008400c49ae8bd4b37b586f344a58e45e0d2986a18d80b1b2db037a5d.scope.
Dec  7 04:57:22 np0005549474 podman[212903]: 2025-12-07 09:57:21.919754928 +0000 UTC m=+0.029170265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:57:22 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:57:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5968bc918df77889e3628592c56ef0c2d8d85f62886199bf02f8589411adf2cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5968bc918df77889e3628592c56ef0c2d8d85f62886199bf02f8589411adf2cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5968bc918df77889e3628592c56ef0c2d8d85f62886199bf02f8589411adf2cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5968bc918df77889e3628592c56ef0c2d8d85f62886199bf02f8589411adf2cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:22 np0005549474 podman[212903]: 2025-12-07 09:57:22.032293785 +0000 UTC m=+0.141709092 container init 737dbaf008400c49ae8bd4b37b586f344a58e45e0d2986a18d80b1b2db037a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 04:57:22 np0005549474 podman[212903]: 2025-12-07 09:57:22.038714431 +0000 UTC m=+0.148129748 container start 737dbaf008400c49ae8bd4b37b586f344a58e45e0d2986a18d80b1b2db037a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:57:22 np0005549474 podman[212903]: 2025-12-07 09:57:22.04273488 +0000 UTC m=+0.152150207 container attach 737dbaf008400c49ae8bd4b37b586f344a58e45e0d2986a18d80b1b2db037a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shamir, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:57:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:22.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]: {
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:    "0": [
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:        {
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            "devices": [
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "/dev/loop3"
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            ],
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            "lv_name": "ceph_lv0",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            "lv_size": "21470642176",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            "name": "ceph_lv0",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            "tags": {
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.cluster_name": "ceph",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.crush_device_class": "",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.encrypted": "0",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.osd_id": "0",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.type": "block",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.vdo": "0",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:                "ceph.with_tpm": "0"
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            },
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            "type": "block",
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:            "vg_name": "ceph_vg0"
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:        }
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]:    ]
Dec  7 04:57:22 np0005549474 xenodochial_shamir[212943]: }
Dec  7 04:57:22 np0005549474 systemd[1]: libpod-737dbaf008400c49ae8bd4b37b586f344a58e45e0d2986a18d80b1b2db037a5d.scope: Deactivated successfully.
Dec  7 04:57:22 np0005549474 podman[212903]: 2025-12-07 09:57:22.388375671 +0000 UTC m=+0.497791018 container died 737dbaf008400c49ae8bd4b37b586f344a58e45e0d2986a18d80b1b2db037a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:57:22 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5968bc918df77889e3628592c56ef0c2d8d85f62886199bf02f8589411adf2cf-merged.mount: Deactivated successfully.
Dec  7 04:57:22 np0005549474 podman[212903]: 2025-12-07 09:57:22.434487797 +0000 UTC m=+0.543903104 container remove 737dbaf008400c49ae8bd4b37b586f344a58e45e0d2986a18d80b1b2db037a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 04:57:22 np0005549474 systemd[1]: libpod-conmon-737dbaf008400c49ae8bd4b37b586f344a58e45e0d2986a18d80b1b2db037a5d.scope: Deactivated successfully.
Dec  7 04:57:22 np0005549474 python3.9[213079]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:57:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:22 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5690003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:23 np0005549474 podman[213251]: 2025-12-07 09:57:23.046799446 +0000 UTC m=+0.042390097 container create 62d9e8fc6811eee2417514d36d109ade047849e1c97992ade76a2d7f54f819e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_antonelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  7 04:57:23 np0005549474 systemd[1]: Started libpod-conmon-62d9e8fc6811eee2417514d36d109ade047849e1c97992ade76a2d7f54f819e3.scope.
Dec  7 04:57:23 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:57:23 np0005549474 podman[213251]: 2025-12-07 09:57:23.030933613 +0000 UTC m=+0.026524274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:57:23 np0005549474 podman[213251]: 2025-12-07 09:57:23.135620536 +0000 UTC m=+0.131211197 container init 62d9e8fc6811eee2417514d36d109ade047849e1c97992ade76a2d7f54f819e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_antonelli, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:57:23 np0005549474 podman[213251]: 2025-12-07 09:57:23.145978859 +0000 UTC m=+0.141569510 container start 62d9e8fc6811eee2417514d36d109ade047849e1c97992ade76a2d7f54f819e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_antonelli, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 04:57:23 np0005549474 podman[213251]: 2025-12-07 09:57:23.149679789 +0000 UTC m=+0.145270500 container attach 62d9e8fc6811eee2417514d36d109ade047849e1c97992ade76a2d7f54f819e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:57:23 np0005549474 bold_antonelli[213292]: 167 167
Dec  7 04:57:23 np0005549474 systemd[1]: libpod-62d9e8fc6811eee2417514d36d109ade047849e1c97992ade76a2d7f54f819e3.scope: Deactivated successfully.
Dec  7 04:57:23 np0005549474 podman[213251]: 2025-12-07 09:57:23.150698948 +0000 UTC m=+0.146289589 container died 62d9e8fc6811eee2417514d36d109ade047849e1c97992ade76a2d7f54f819e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 04:57:23 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6d2bee0f0f576bb1473b994fcf36bebf069db8110f60c10725a220ab7f5a4e07-merged.mount: Deactivated successfully.
Dec  7 04:57:23 np0005549474 podman[213251]: 2025-12-07 09:57:23.189816284 +0000 UTC m=+0.185406925 container remove 62d9e8fc6811eee2417514d36d109ade047849e1c97992ade76a2d7f54f819e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_antonelli, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:57:23 np0005549474 systemd[1]: libpod-conmon-62d9e8fc6811eee2417514d36d109ade047849e1c97992ade76a2d7f54f819e3.scope: Deactivated successfully.
Dec  7 04:57:23 np0005549474 python3.9[213324]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101442.049386-2285-80591035371339/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:23 np0005549474 podman[213346]: 2025-12-07 09:57:23.385910749 +0000 UTC m=+0.061132178 container create d8e21d1f659a27fa4db274098417f543b171a40220aa67f0bbaab0190dbceddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chatelet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:57:23 np0005549474 systemd[1]: Started libpod-conmon-d8e21d1f659a27fa4db274098417f543b171a40220aa67f0bbaab0190dbceddd.scope.
Dec  7 04:57:23 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:57:23 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c038dc047a3f4ed6d813f72a8568c7d40d9ecf1b213f26c399d5d953afa455/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:23 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c038dc047a3f4ed6d813f72a8568c7d40d9ecf1b213f26c399d5d953afa455/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:23 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c038dc047a3f4ed6d813f72a8568c7d40d9ecf1b213f26c399d5d953afa455/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:23 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c038dc047a3f4ed6d813f72a8568c7d40d9ecf1b213f26c399d5d953afa455/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:57:23 np0005549474 podman[213346]: 2025-12-07 09:57:23.357482853 +0000 UTC m=+0.032704332 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:57:23 np0005549474 podman[213346]: 2025-12-07 09:57:23.466100774 +0000 UTC m=+0.141322203 container init d8e21d1f659a27fa4db274098417f543b171a40220aa67f0bbaab0190dbceddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:57:23 np0005549474 podman[213346]: 2025-12-07 09:57:23.477151955 +0000 UTC m=+0.152373384 container start d8e21d1f659a27fa4db274098417f543b171a40220aa67f0bbaab0190dbceddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 04:57:23 np0005549474 podman[213346]: 2025-12-07 09:57:23.480378723 +0000 UTC m=+0.155600142 container attach d8e21d1f659a27fa4db274098417f543b171a40220aa67f0bbaab0190dbceddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chatelet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:57:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:23 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:23.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:23 np0005549474 python3.9[213539]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:23 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:24 np0005549474 lvm[213623]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:57:24 np0005549474 lvm[213623]: VG ceph_vg0 finished
Dec  7 04:57:24 np0005549474 nostalgic_chatelet[213380]: {}
Dec  7 04:57:24 np0005549474 systemd[1]: libpod-d8e21d1f659a27fa4db274098417f543b171a40220aa67f0bbaab0190dbceddd.scope: Deactivated successfully.
Dec  7 04:57:24 np0005549474 systemd[1]: libpod-d8e21d1f659a27fa4db274098417f543b171a40220aa67f0bbaab0190dbceddd.scope: Consumed 1.046s CPU time.
Dec  7 04:57:24 np0005549474 podman[213346]: 2025-12-07 09:57:24.195493904 +0000 UTC m=+0.870715363 container died d8e21d1f659a27fa4db274098417f543b171a40220aa67f0bbaab0190dbceddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:57:24 np0005549474 systemd[1]: var-lib-containers-storage-overlay-49c038dc047a3f4ed6d813f72a8568c7d40d9ecf1b213f26c399d5d953afa455-merged.mount: Deactivated successfully.
Dec  7 04:57:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:24 np0005549474 podman[213346]: 2025-12-07 09:57:24.243834731 +0000 UTC m=+0.919056150 container remove d8e21d1f659a27fa4db274098417f543b171a40220aa67f0bbaab0190dbceddd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:57:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:24 np0005549474 systemd[1]: libpod-conmon-d8e21d1f659a27fa4db274098417f543b171a40220aa67f0bbaab0190dbceddd.scope: Deactivated successfully.
Dec  7 04:57:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:57:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:57:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:24 np0005549474 python3.9[213750]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101443.5237575-2285-173251026929524/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:57:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:24 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:25 np0005549474 python3.9[213905]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:57:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:25 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:25.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:25 np0005549474 python3.9[214029]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101444.7183325-2285-163801777084270/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:25 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00043a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:26.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:26 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:57:26 np0005549474 python3.9[214181]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:57:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:26 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:26 np0005549474 python3.9[214305]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101445.9688551-2285-88514817273813/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:57:27.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:57:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:57:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:57:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:27.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:27 np0005549474 python3.9[214458]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:28.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:28 np0005549474 python3.9[214581]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101447.15503-2285-221763170154733/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec  7 04:57:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:28 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00043c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:28 np0005549474 python3.9[214733]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:29 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:57:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:29 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:57:29 np0005549474 python3.9[214858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101448.476439-2285-46813043240500/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:29 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:29.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:29] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:57:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:29] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:57:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:29 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:30 np0005549474 python3.9[215010]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:30.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:30 np0005549474 python3.9[215133]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101449.6622136-2285-199041061180410/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 04:57:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:30 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:31 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00043e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:31.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:31 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:32.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:32 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:57:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 04:57:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:32 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:33 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:33.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:33 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00043e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:34.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:34 np0005549474 python3.9[215290]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:57:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:57:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:34 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:35 np0005549474 podman[215372]: 2025-12-07 09:57:35.308902909 +0000 UTC m=+0.111729956 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Dec  7 04:57:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:35 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:35 np0005549474 python3.9[215473]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  7 04:57:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:35.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:36 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c001a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:36.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 04:57:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:36 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:57:37.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:57:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:57:37.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:57:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:57:37.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:57:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:37 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:37 np0005549474 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  7 04:57:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:57:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:37.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:57:37 np0005549474 python3.9[215631]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:38 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:38.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095738 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:57:38 np0005549474 python3.9[215783]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:57:38.609 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 04:57:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:57:38.609 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 04:57:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:57:38.609 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 04:57:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 04:57:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:38 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c001a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:39 np0005549474 python3.9[215939]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:39 np0005549474 podman[215964]: 2025-12-07 09:57:39.399317662 +0000 UTC m=+0.058085544 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec  7 04:57:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:39 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c001a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:39.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:39 np0005549474 python3.9[216136]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:39] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:57:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:39] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:57:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:40 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:57:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:40.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:57:40 np0005549474 python3.9[216288]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 853 B/s wr, 2 op/s
Dec  7 04:57:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:40 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:41 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:41.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:41 np0005549474 python3.9[216442]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:42 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:42.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:57:42
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'backups', '.nfs', 'volumes', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', '.rgw.root']
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:57:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:57:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:57:42 np0005549474 python3.9[216594]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:57:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:57:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:42 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:43 np0005549474 python3.9[216747]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:43 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:43 np0005549474 python3.9[216900]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:43.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:44 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:44 np0005549474 python3.9[217052]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:44.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:57:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:44 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:45 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:45.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:46 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:46 np0005549474 python3.9[217206]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:57:46 np0005549474 systemd[1]: Reloading.
Dec  7 04:57:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:46.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:46 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:57:46 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:57:46 np0005549474 systemd[1]: Starting libvirt logging daemon socket...
Dec  7 04:57:46 np0005549474 systemd[1]: Listening on libvirt logging daemon socket.
Dec  7 04:57:46 np0005549474 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  7 04:57:46 np0005549474 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  7 04:57:46 np0005549474 systemd[1]: Starting libvirt logging daemon...
Dec  7 04:57:46 np0005549474 systemd[1]: Started libvirt logging daemon.
Dec  7 04:57:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:57:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:46 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:57:47.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:57:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:47 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:47 np0005549474 python3.9[217401]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:57:47 np0005549474 systemd[1]: Reloading.
Dec  7 04:57:47 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:57:47 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:57:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:47.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:47 np0005549474 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  7 04:57:47 np0005549474 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  7 04:57:47 np0005549474 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  7 04:57:47 np0005549474 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  7 04:57:47 np0005549474 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  7 04:57:47 np0005549474 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  7 04:57:47 np0005549474 systemd[1]: Starting libvirt nodedev daemon...
Dec  7 04:57:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:48 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:48 np0005549474 systemd[1]: Started libvirt nodedev daemon.
Dec  7 04:57:48 np0005549474 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  7 04:57:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:48.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:48 np0005549474 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  7 04:57:48 np0005549474 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  7 04:57:48 np0005549474 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  7 04:57:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:57:48 np0005549474 python3.9[217625]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:57:48 np0005549474 systemd[1]: Reloading.
Dec  7 04:57:48 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:57:48 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:57:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:48 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:49 np0005549474 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  7 04:57:49 np0005549474 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  7 04:57:49 np0005549474 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  7 04:57:49 np0005549474 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  7 04:57:49 np0005549474 systemd[1]: Starting libvirt proxy daemon...
Dec  7 04:57:49 np0005549474 systemd[1]: Started libvirt proxy daemon.
Dec  7 04:57:49 np0005549474 setroubleshoot[217462]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e83885a0-2fad-4cc0-a5e3-a011b4963539
Dec  7 04:57:49 np0005549474 setroubleshoot[217462]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Dec  7 04:57:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:49 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:49.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:49] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Dec  7 04:57:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:49] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Dec  7 04:57:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:50 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:50 np0005549474 python3.9[217842]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
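The task above is ansible's systemd module acting on virtqemud with daemon_reload=True; the "Reloading." and generator lines that follow are that daemon reload taking effect. A rough shell equivalent of the logged invocation:

    systemctl daemon-reload
    systemctl restart virtqemud.service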
Dec  7 04:57:50 np0005549474 systemd[1]: Reloading.
Dec  7 04:57:50 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:57:50 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:57:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:50.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:50 np0005549474 systemd[1]: Listening on libvirt locking daemon socket.
Dec  7 04:57:50 np0005549474 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  7 04:57:50 np0005549474 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  7 04:57:50 np0005549474 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  7 04:57:50 np0005549474 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  7 04:57:50 np0005549474 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  7 04:57:50 np0005549474 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  7 04:57:50 np0005549474 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  7 04:57:50 np0005549474 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  7 04:57:50 np0005549474 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  7 04:57:50 np0005549474 systemd[1]: Starting libvirt QEMU daemon...
Dec  7 04:57:50 np0005549474 systemd[1]: Started libvirt QEMU daemon.
Dec  7 04:57:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:57:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:50 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:51 np0005549474 python3.9[218059]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:57:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:51 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:51 np0005549474 systemd[1]: Reloading.
Dec  7 04:57:51 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:57:51 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:57:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:51.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:51 np0005549474 systemd[1]: Starting libvirt secret daemon socket...
Dec  7 04:57:51 np0005549474 systemd[1]: Listening on libvirt secret daemon socket.
Dec  7 04:57:51 np0005549474 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  7 04:57:51 np0005549474 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  7 04:57:51 np0005549474 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  7 04:57:51 np0005549474 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  7 04:57:51 np0005549474 systemd[1]: Starting libvirt secret daemon...
Dec  7 04:57:51 np0005549474 systemd[1]: Started libvirt secret daemon.
Dec  7 04:57:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:52 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:52.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:57:52 np0005549474 python3.9[218271]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:52 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:53 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:53 np0005549474 python3.9[218425]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  7 04:57:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:53.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:54 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:57:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:54.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:57:54 np0005549474 python3.9[218577]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
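The command task above is logged with #012 escapes standing in for newlines. Unescaped, the shell it ran extracts the cluster fsid from the staged ceph.conf:

    set -o pipefail
    echo ceph
    # Print the value of the "fsid = ..." key; xargs trims the
    # surrounding whitespace from awk's output.
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs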
Dec  7 04:57:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:57:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:54 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:55 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:55 np0005549474 python3.9[218733]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
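Paired with the *.conf search at 04:57:53, this task enumerates the staged ceph keyrings. A rough shell equivalent of the logged module parameters (top-level files only, since recurse=False):

    find /var/lib/openstack/config/ceph -maxdepth 1 -type f -name '*.conf'
    find /var/lib/openstack/config/ceph -maxdepth 1 -type f -name '*.keyring'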
Dec  7 04:57:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:55.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:56 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:57:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:56.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:56 np0005549474 python3.9[218883]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:57:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:57:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:56 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:57:57.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:57:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:57:57.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:57:57 np0005549474 python3.9[219005]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101476.2152617-3359-4537615051604/.source.xml follow=False _original_basename=secret.xml.j2 checksum=ec35f87f58a946e19c403a490b743bca3d89a26e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:57:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:57:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:57 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:57:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:57.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:57:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:58 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:58 np0005549474 python3.9[219158]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 75f4c9fd-539a-5e17-b55a-0a12a4e2736c#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
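Unescaped, the task above rotates the libvirt secret in two virsh calls: drop any stale secret keyed by the ceph fsid, then re-register it from the freshly templated XML (the /tmp/secret.xml copied in at 04:57:57 and deleted again at 04:57:59):

    virsh secret-undefine 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
    virsh secret-define --file /tmp/secret.xml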
Dec  7 04:57:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:57:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:58.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:57:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:57:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:58 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8009a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:59 np0005549474 python3.9[219321]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:57:59 np0005549474 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  7 04:57:59 np0005549474 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  7 04:57:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:57:59 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:57:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:57:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:57:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:59.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:57:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:59] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 04:57:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:57:59] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 04:58:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:00 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:00.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:58:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:00 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:01 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8009a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:01.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:01 np0005549474 python3.9[219814]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:02 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:02.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:58:02 np0005549474 python3.9[219966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:02 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:03 np0005549474 python3.9[220091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101482.2559056-3524-40746672910192/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:03 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:03.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:04 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8009a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:04 np0005549474 python3.9[220243]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 04:58:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:04 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:05 np0005549474 python3.9[220396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:05 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:05 np0005549474 podman[220447]: 2025-12-07 09:58:05.619171853 +0000 UTC m=+0.112079840 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  7 04:58:05 np0005549474 python3.9[220483]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:05.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:06 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:58:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:06.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:58:06 np0005549474 python3.9[220653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:58:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:06 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:58:07.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:58:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:58:07.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:58:07 np0005549474 python3.9[220732]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.20bo3s1q recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:07 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:07.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:08 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:08 np0005549474 python3.9[220885]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:08.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095808 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:58:08 np0005549474 python3.9[220963]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:58:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:08 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:09 np0005549474 python3.9[221117]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
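The -j flag makes nft emit the ruleset as JSON rather than its native syntax, which the edpm tasks that follow can presumably parse mechanically. To inspect the same output by hand (jq is illustrative here, not part of the logged task):

    nft -j list ruleset | jq .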
Dec  7 04:58:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:09 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:09.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:09] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 04:58:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:09] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 04:58:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:10 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:10 np0005549474 podman[221196]: 2025-12-07 09:58:10.260927715 +0000 UTC m=+0.075131074 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  7 04:58:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:10.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:10 np0005549474 python3[221292]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  7 04:58:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:58:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:10 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:11 np0005549474 python3.9[221446]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:11 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:11.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:11 np0005549474 python3.9[221524]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:12 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:12.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:58:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:58:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:58:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:58:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:58:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:58:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:58:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:58:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:58:12 np0005549474 python3.9[221676]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:12 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:13 np0005549474 python3.9[221756]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:13 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8009a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:13.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:14 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:14 np0005549474 python3.9[221908]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:14.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:14 np0005549474 python3.9[221986]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:58:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:14 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:15 np0005549474 python3.9[222141]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:15.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:16 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0001fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:16 np0005549474 python3.9[222219]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:16.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:58:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:16 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:58:17.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:58:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:17 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:17.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:18 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:18 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:58:18 np0005549474 python3.9[222373]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:18.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:18 np0005549474 python3.9[222498]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101497.6008523-3899-103883661472332/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:58:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:18 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0001fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:19 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:19 np0005549474 python3.9[222652]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:19.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:19] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  7 04:58:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:19] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  7 04:58:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:20 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:20.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:20 np0005549474 python3.9[222829]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:58:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  7 04:58:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:20 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.042687) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501042724, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4132, "num_deletes": 502, "total_data_size": 8479824, "memory_usage": 8597936, "flush_reason": "Manual Compaction"}
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501093107, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 4743608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13330, "largest_seqno": 17461, "table_properties": {"data_size": 4731446, "index_size": 6992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4101, "raw_key_size": 32374, "raw_average_key_size": 19, "raw_value_size": 4702977, "raw_average_value_size": 2897, "num_data_blocks": 305, "num_entries": 1623, "num_filter_entries": 1623, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101044, "oldest_key_time": 1765101044, "file_creation_time": 1765101501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 50486 microseconds, and 8175 cpu microseconds.
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.093175) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 4743608 bytes OK
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.093191) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.096152) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.096175) EVENT_LOG_v1 {"time_micros": 1765101501096170, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.096210) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8463094, prev total WAL file size 8463094, number of live WAL files 2.
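The flush just logged (JOB 13) emits its statistics as EVENT_LOG_v1 payloads, which are plain JSON after a fixed prefix, so flush latency can be cross-checked directly from the journal (the input path is an assumption):

    import json, re

    # Pair rocksdb flush_started/flush_finished events per job id and print
    # the wall-clock delta; for JOB 13 this lands near the reported 50486 us.
    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")
    starts = {}
    for line in open("/var/log/messages"):      # path is an assumption
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "flush_started":
            starts[ev["job"]] = ev["time_micros"]
        elif ev.get("event") == "flush_finished" and ev["job"] in starts:
            print("job", ev["job"], ev["time_micros"] - starts[ev["job"]], "us")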
Dec  7 04:58:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:58:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.097849) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
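The compaction endpoints in the line above are hex-encoded monitor store keys, so the range can be decoded to see which key family is being trimmed:

    # Decode the manual-compaction key range from the line above.
    for h in ("6D67727374617400323530", "6D67727374617400353032"):
        print(bytes.fromhex(h))   # b'mgrstat\x00250' .. b'mgrstat\x00502'
    # The later JOB 16 range decodes the same way, to b'paxos\x001004' ..
    # b'paxos\x001256'.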
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(4632KB)], [32(12MB)]
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501097919, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 18214466, "oldest_snapshot_seqno": -1}
Dec  7 04:58:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5168 keys, 14111419 bytes, temperature: kUnknown
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501246817, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 14111419, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14074687, "index_size": 22734, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 129101, "raw_average_key_size": 24, "raw_value_size": 13978955, "raw_average_value_size": 2704, "num_data_blocks": 952, "num_entries": 5168, "num_filter_entries": 5168, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765101501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.247015) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 14111419 bytes
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.249632) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.3 rd, 94.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.5, 12.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(6.8) write-amplify(3.0) OK, records in: 5974, records dropped: 806 output_compression: NoCompression
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.249650) EVENT_LOG_v1 {"time_micros": 1765101501249642, "job": 14, "event": "compaction_finished", "compaction_time_micros": 148948, "compaction_time_cpu_micros": 26910, "output_level": 6, "num_output_files": 1, "total_output_size": 14111419, "num_input_records": 5974, "num_output_records": 5168, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
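The amplification figures in the JOB 14 summary two lines up follow directly from the reported sizes:

    # From "MB in(4.5, 12.8 +0.0 blob) out(13.5 +0.0 blob)" in the summary:
    in_l0, in_l6, out = 4.5, 12.8, 13.5
    print(round(out / in_l0, 1))                    # 3.0 -> write-amplify(3.0)
    print(round((in_l0 + in_l6 + out) / in_l0, 1))  # 6.8 -> read-write-amplify(6.8)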
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501250488, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501252794, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.097730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.252862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.252867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.252869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.252871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:21 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:21.252873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:21 np0005549474 python3.9[222985]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
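In the blockinfile arguments, #012 is the journal's escaping of embedded newlines, so the managed block written into /etc/sysconfig/nftables.conf renders as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

with the whole file syntax-checked by the validate=nft -c -f %s hook before it replaces the original.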
Dec  7 04:58:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:21 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0001fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:21.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:22 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc002c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:22 np0005549474 python3.9[223138]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:58:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:22.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  7 04:58:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:22 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4003630 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:23 np0005549474 python3.9[223295]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:58:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:23 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:23.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:23 np0005549474 python3.9[223451]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:58:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:24 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0002170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:24 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:58:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:24.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:24 np0005549474 python3.9[223606]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
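That deletion closes out the firewall update that started at 04:58:18: copy edpm-rules.nft, touch the .changed marker, dry-run the full chain/rule/jump set through nft -c -f -, load the chains, re-apply flushes/rules/update-jumps, then clear the marker. A compressed sketch of the same sequence (file layout taken from the commands above; error handling omitted):

    import subprocess
    from pathlib import Path

    NFT_DIR = Path("/etc/nftables")
    CHECK = ["edpm-chains.nft", "edpm-flushes.nft", "edpm-rules.nft",
             "edpm-update-jumps.nft", "edpm-jumps.nft"]
    APPLY = ["edpm-flushes.nft", "edpm-rules.nft", "edpm-update-jumps.nft"]

    def nft(args, stdin=None):
        subprocess.run(["nft", *args], input=stdin, text=True, check=True)

    # Dry-run the combined ruleset, then apply chains and rules for real.
    nft(["-c", "-f", "-"], "".join((NFT_DIR / f).read_text() for f in CHECK))
    nft(["-f", str(NFT_DIR / "edpm-chains.nft")])
    nft(["-f", "-"], "".join((NFT_DIR / f).read_text() for f in APPLY))
    (NFT_DIR / "edpm-rules.nft.changed").unlink(missing_ok=True)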
Dec  7 04:58:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:58:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:24 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc003940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
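This burst of dispatches is the cephadm mgr module refreshing its view of the cluster; the same queries can be issued from a client shell (a rough equivalence only — the module talks over its mon session rather than forking the CLI):

    import subprocess

    # CLI counterparts of the audited mon_commands above.
    for argv in (
        ["ceph", "config", "generate-minimal-conf"],
        ["ceph", "auth", "get", "client.admin"],
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
        ["ceph", "auth", "get", "client.bootstrap-osd"],
    ):
        print(subprocess.run(argv, capture_output=True, text=True).stdout)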
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.560546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505560585, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 299, "num_deletes": 251, "total_data_size": 84282, "memory_usage": 90128, "flush_reason": "Manual Compaction"}
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505563043, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 83637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17462, "largest_seqno": 17760, "table_properties": {"data_size": 81708, "index_size": 157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5144, "raw_average_key_size": 18, "raw_value_size": 77792, "raw_average_value_size": 280, "num_data_blocks": 7, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101502, "oldest_key_time": 1765101502, "file_creation_time": 1765101505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 2532 microseconds, and 908 cpu microseconds.
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.563079) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 83637 bytes OK
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.563095) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.564783) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.564802) EVENT_LOG_v1 {"time_micros": 1765101505564796, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.564817) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 82118, prev total WAL file size 82118, number of live WAL files 2.
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.566345) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(81KB)], [35(13MB)]
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505566835, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 14195056, "oldest_snapshot_seqno": -1}
Dec  7 04:58:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:25 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:25 np0005549474 python3.9[223841]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:25.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4932 keys, 11979702 bytes, temperature: kUnknown
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505932748, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 11979702, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11946203, "index_size": 20077, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124865, "raw_average_key_size": 25, "raw_value_size": 11856136, "raw_average_value_size": 2403, "num_data_blocks": 835, "num_entries": 4932, "num_filter_entries": 4932, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765101505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.933083) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 11979702 bytes
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.935511) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.8 rd, 32.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.5 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(313.0) write-amplify(143.2) OK, records in: 5445, records dropped: 513 output_compression: NoCompression
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.935536) EVENT_LOG_v1 {"time_micros": 1765101505935524, "job": 16, "event": "compaction_finished", "compaction_time_micros": 365978, "compaction_time_cpu_micros": 39473, "output_level": 6, "num_output_files": 1, "total_output_size": 11979702, "num_input_records": 5445, "num_output_records": 4932, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505935953, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505938615, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.566252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.938788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.938795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.938797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.938799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:25 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-09:58:25.938801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 04:58:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:26 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:26 np0005549474 podman[224025]: 2025-12-07 09:58:26.021615055 +0000 UTC m=+0.019739728 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:58:26 np0005549474 python3.9[224067]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101505.1877816-4115-49973587403508/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:58:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:26.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:58:26 np0005549474 podman[224025]: 2025-12-07 09:58:26.536154834 +0000 UTC m=+0.534279497 container create d69545e1990ae93a972dcbec9b82ec581783ea3d2d0fc0928d501190803d3e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:58:26 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:58:26 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:58:26 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:58:26 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:58:26 np0005549474 systemd[1]: Started libpod-conmon-d69545e1990ae93a972dcbec9b82ec581783ea3d2d0fc0928d501190803d3e85.scope.
Dec  7 04:58:26 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:58:26 np0005549474 podman[224025]: 2025-12-07 09:58:26.636115184 +0000 UTC m=+0.634239857 container init d69545e1990ae93a972dcbec9b82ec581783ea3d2d0fc0928d501190803d3e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:58:26 np0005549474 podman[224025]: 2025-12-07 09:58:26.643562686 +0000 UTC m=+0.641687339 container start d69545e1990ae93a972dcbec9b82ec581783ea3d2d0fc0928d501190803d3e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_napier, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:58:26 np0005549474 strange_napier[224094]: 167 167
Dec  7 04:58:26 np0005549474 systemd[1]: libpod-d69545e1990ae93a972dcbec9b82ec581783ea3d2d0fc0928d501190803d3e85.scope: Deactivated successfully.
Dec  7 04:58:26 np0005549474 podman[224025]: 2025-12-07 09:58:26.701183054 +0000 UTC m=+0.699307707 container attach d69545e1990ae93a972dcbec9b82ec581783ea3d2d0fc0928d501190803d3e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_napier, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 04:58:26 np0005549474 podman[224025]: 2025-12-07 09:58:26.702544381 +0000 UTC m=+0.700669044 container died d69545e1990ae93a972dcbec9b82ec581783ea3d2d0fc0928d501190803d3e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:58:26 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4364c5b89c4a177fc7b784a33d60df2b6a6a9fe388fe4d365111933e773b8868-merged.mount: Deactivated successfully.
Dec  7 04:58:26 np0005549474 podman[224025]: 2025-12-07 09:58:26.744967865 +0000 UTC m=+0.743092518 container remove d69545e1990ae93a972dcbec9b82ec581783ea3d2d0fc0928d501190803d3e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 04:58:26 np0005549474 systemd[1]: libpod-conmon-d69545e1990ae93a972dcbec9b82ec581783ea3d2d0fc0928d501190803d3e85.scope: Deactivated successfully.
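The whole strange_napier lifecycle (create, init, start, one line of output, died, remove) lasted well under a second, and its only output was "167 167" — the ceph uid/gid baked into the image. That matches cephadm probing the image for the ceph account before deploying daemons; a hedged reproduction (the entrypoint and the path being stat'ed are assumptions about the probe):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # Ask the image which uid/gid owns the ceph state directory.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())   # expected: 167 167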
Dec  7 04:58:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:58:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:26 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:26 np0005549474 podman[224220]: 2025-12-07 09:58:26.888297345 +0000 UTC m=+0.027720986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:58:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:58:27.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:58:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:58:27.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
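Alertmanager is failing over both dashboard receivers here: compute-1 times out on connect and compute-2 hits the context deadline, so the notification is dropped after the retries. A minimal reachability check against the same endpoint (the empty JSON body is a placeholder; Alertmanager posts its full webhook payload):

    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(URL, data=b"{}",
                                 headers={"Content-Type": "application/json"})
    try:
        print(urllib.request.urlopen(req, timeout=5).status)
    except OSError as exc:
        print("unreachable:", exc)   # matches the dial tcp ... i/o timeout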
Dec  7 04:58:27 np0005549474 podman[224220]: 2025-12-07 09:58:27.08039963 +0000 UTC m=+0.219823271 container create 44dbd017ee71d36301a8b9d1ab2e3e4941bbd50ed7589ab71727eab9783d7f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 04:58:27 np0005549474 python3.9[224263]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:27 np0005549474 systemd[1]: Started libpod-conmon-44dbd017ee71d36301a8b9d1ab2e3e4941bbd50ed7589ab71727eab9783d7f9a.scope.
Dec  7 04:58:27 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:58:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c3df858b747645c7c737fc95382ae08938364961370127c57dea57a2e892df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c3df858b747645c7c737fc95382ae08938364961370127c57dea57a2e892df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c3df858b747645c7c737fc95382ae08938364961370127c57dea57a2e892df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c3df858b747645c7c737fc95382ae08938364961370127c57dea57a2e892df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c3df858b747645c7c737fc95382ae08938364961370127c57dea57a2e892df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:27 np0005549474 podman[224220]: 2025-12-07 09:58:27.376076805 +0000 UTC m=+0.515500446 container init 44dbd017ee71d36301a8b9d1ab2e3e4941bbd50ed7589ab71727eab9783d7f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_keldysh, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:58:27 np0005549474 podman[224220]: 2025-12-07 09:58:27.386039716 +0000 UTC m=+0.525463357 container start 44dbd017ee71d36301a8b9d1ab2e3e4941bbd50ed7589ab71727eab9783d7f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Dec  7 04:58:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:58:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:58:27 np0005549474 podman[224220]: 2025-12-07 09:58:27.517515383 +0000 UTC m=+0.656939034 container attach 44dbd017ee71d36301a8b9d1ab2e3e4941bbd50ed7589ab71727eab9783d7f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_keldysh, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:58:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:27 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc003940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:27 np0005549474 python3.9[224394]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101506.648344-4160-266057102571979/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:27 np0005549474 great_keldysh[224266]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:58:27 np0005549474 great_keldysh[224266]: --> All data devices are unavailable
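great_keldysh is a ceph-volume pass over this host's disks: the single candidate is an LVM device that is already consumed, so there is nothing to deploy ("All data devices are unavailable" is a report, not an error). One way to see the per-device verdicts (running through cephadm shell is an assumption; the inventory subcommand is stock ceph-volume):

    import json, subprocess

    out = subprocess.run(
        ["cephadm", "shell", "--", "ceph-volume", "inventory",
         "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out):
        # e.g. /dev/vda  False  ['Has a FileSystem', ...]
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))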
Dec  7 04:58:27 np0005549474 systemd[1]: libpod-44dbd017ee71d36301a8b9d1ab2e3e4941bbd50ed7589ab71727eab9783d7f9a.scope: Deactivated successfully.
Dec  7 04:58:27 np0005549474 podman[224220]: 2025-12-07 09:58:27.738377521 +0000 UTC m=+0.877801142 container died 44dbd017ee71d36301a8b9d1ab2e3e4941bbd50ed7589ab71727eab9783d7f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_keldysh, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:58:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:27.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:27 np0005549474 systemd[1]: var-lib-containers-storage-overlay-88c3df858b747645c7c737fc95382ae08938364961370127c57dea57a2e892df-merged.mount: Deactivated successfully.
Dec  7 04:58:27 np0005549474 podman[224220]: 2025-12-07 09:58:27.952255311 +0000 UTC m=+1.091678932 container remove 44dbd017ee71d36301a8b9d1ab2e3e4941bbd50ed7589ab71727eab9783d7f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_keldysh, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 04:58:27 np0005549474 systemd[1]: libpod-conmon-44dbd017ee71d36301a8b9d1ab2e3e4941bbd50ed7589ab71727eab9783d7f9a.scope: Deactivated successfully.
Dec  7 04:58:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:28 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:28.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:28 np0005549474 python3.9[224622]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:28 np0005549474 podman[224661]: 2025-12-07 09:58:28.504632029 +0000 UTC m=+0.021225879 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:58:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:58:28 np0005549474 podman[224661]: 2025-12-07 09:58:28.883687811 +0000 UTC m=+0.400281631 container create 709f4b850ed0e02aa0a9a7e1bc404ff317312dfea1e4b4644ca6ee87b787e6ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 04:58:28 np0005549474 systemd[1]: Started libpod-conmon-709f4b850ed0e02aa0a9a7e1bc404ff317312dfea1e4b4644ca6ee87b787e6ae.scope.
Dec  7 04:58:28 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:58:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:28 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:28 np0005549474 podman[224661]: 2025-12-07 09:58:28.98142364 +0000 UTC m=+0.498017480 container init 709f4b850ed0e02aa0a9a7e1bc404ff317312dfea1e4b4644ca6ee87b787e6ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldwasser, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:58:28 np0005549474 podman[224661]: 2025-12-07 09:58:28.989810688 +0000 UTC m=+0.506404508 container start 709f4b850ed0e02aa0a9a7e1bc404ff317312dfea1e4b4644ca6ee87b787e6ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldwasser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:58:28 np0005549474 hopeful_goldwasser[224801]: 167 167
Dec  7 04:58:28 np0005549474 systemd[1]: libpod-709f4b850ed0e02aa0a9a7e1bc404ff317312dfea1e4b4644ca6ee87b787e6ae.scope: Deactivated successfully.
Dec  7 04:58:29 np0005549474 podman[224661]: 2025-12-07 09:58:29.066413312 +0000 UTC m=+0.583007152 container attach 709f4b850ed0e02aa0a9a7e1bc404ff317312dfea1e4b4644ca6ee87b787e6ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 04:58:29 np0005549474 podman[224661]: 2025-12-07 09:58:29.066874815 +0000 UTC m=+0.583468625 container died 709f4b850ed0e02aa0a9a7e1bc404ff317312dfea1e4b4644ca6ee87b787e6ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldwasser, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 04:58:29 np0005549474 python3.9[224798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101508.0150087-4205-102453886548862/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:29 np0005549474 systemd[1]: var-lib-containers-storage-overlay-1452cc6c034eb263aea7b7633cad474ba7fbfe6906d25da9a5a663b48aade6a5-merged.mount: Deactivated successfully.
Dec  7 04:58:29 np0005549474 podman[224661]: 2025-12-07 09:58:29.203122741 +0000 UTC m=+0.719716561 container remove 709f4b850ed0e02aa0a9a7e1bc404ff317312dfea1e4b4644ca6ee87b787e6ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldwasser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:58:29 np0005549474 systemd[1]: libpod-conmon-709f4b850ed0e02aa0a9a7e1bc404ff317312dfea1e4b4644ca6ee87b787e6ae.scope: Deactivated successfully.
Dec  7 04:58:29 np0005549474 podman[224850]: 2025-12-07 09:58:29.335563264 +0000 UTC m=+0.022705049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:58:29 np0005549474 podman[224850]: 2025-12-07 09:58:29.45669591 +0000 UTC m=+0.143837685 container create c7dfab71e67414bab913a50480728e0b8bd18b967d7fa4cd1fdecf7a4f655003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hugle, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 04:58:29 np0005549474 systemd[1]: Started libpod-conmon-c7dfab71e67414bab913a50480728e0b8bd18b967d7fa4cd1fdecf7a4f655003.scope.
Dec  7 04:58:29 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:58:29 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af5892cd10f2400d13063e52da94cf0fa8252ea82fb4614915bf72fb1a0e846/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:29 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af5892cd10f2400d13063e52da94cf0fa8252ea82fb4614915bf72fb1a0e846/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:29 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af5892cd10f2400d13063e52da94cf0fa8252ea82fb4614915bf72fb1a0e846/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:29 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af5892cd10f2400d13063e52da94cf0fa8252ea82fb4614915bf72fb1a0e846/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:29 np0005549474 podman[224850]: 2025-12-07 09:58:29.541843886 +0000 UTC m=+0.228985671 container init c7dfab71e67414bab913a50480728e0b8bd18b967d7fa4cd1fdecf7a4f655003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hugle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:58:29 np0005549474 podman[224850]: 2025-12-07 09:58:29.547574652 +0000 UTC m=+0.234716417 container start c7dfab71e67414bab913a50480728e0b8bd18b967d7fa4cd1fdecf7a4f655003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:58:29 np0005549474 podman[224850]: 2025-12-07 09:58:29.550241715 +0000 UTC m=+0.237383470 container attach c7dfab71e67414bab913a50480728e0b8bd18b967d7fa4cd1fdecf7a4f655003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 04:58:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:29 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]: {
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:    "0": [
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:        {
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            "devices": [
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "/dev/loop3"
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            ],
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            "lv_name": "ceph_lv0",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            "lv_size": "21470642176",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            "name": "ceph_lv0",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            "tags": {
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.cluster_name": "ceph",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.crush_device_class": "",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.encrypted": "0",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.osd_id": "0",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.type": "block",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.vdo": "0",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:                "ceph.with_tpm": "0"
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            },
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            "type": "block",
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:            "vg_name": "ceph_vg0"
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:        }
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]:    ]
Dec  7 04:58:29 np0005549474 vigilant_hugle[224918]: }
Dec  7 04:58:29 np0005549474 systemd[1]: libpod-c7dfab71e67414bab913a50480728e0b8bd18b967d7fa4cd1fdecf7a4f655003.scope: Deactivated successfully.
Dec  7 04:58:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:29 np0005549474 podman[224850]: 2025-12-07 09:58:29.831441845 +0000 UTC m=+0.518583610 container died c7dfab71e67414bab913a50480728e0b8bd18b967d7fa4cd1fdecf7a4f655003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hugle, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:58:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:29.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:29 np0005549474 systemd[1]: var-lib-containers-storage-overlay-9af5892cd10f2400d13063e52da94cf0fa8252ea82fb4614915bf72fb1a0e846-merged.mount: Deactivated successfully.
Dec  7 04:58:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:29] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:58:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:29] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:58:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:30 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:30 np0005549474 python3.9[225000]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:58:30 np0005549474 podman[224850]: 2025-12-07 09:58:30.06318966 +0000 UTC m=+0.750331435 container remove c7dfab71e67414bab913a50480728e0b8bd18b967d7fa4cd1fdecf7a4f655003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 04:58:30 np0005549474 systemd[1]: libpod-conmon-c7dfab71e67414bab913a50480728e0b8bd18b967d7fa4cd1fdecf7a4f655003.scope: Deactivated successfully.
Dec  7 04:58:30 np0005549474 systemd[1]: Reloading.
Dec  7 04:58:30 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:58:30 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:58:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:30.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:30 np0005549474 systemd[1]: Reached target edpm_libvirt.target.
Dec  7 04:58:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095830 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:58:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:58:30 np0005549474 podman[225186]: 2025-12-07 09:58:30.949418791 +0000 UTC m=+0.037586464 container create 2b1b3be754c0d65f8eecfddfdc08f56b3fd44bd2d8dcfbf164843cbecc9f682d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:58:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:30 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:31 np0005549474 podman[225186]: 2025-12-07 09:58:30.93211561 +0000 UTC m=+0.020283303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:58:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:31 np0005549474 systemd[1]: Started libpod-conmon-2b1b3be754c0d65f8eecfddfdc08f56b3fd44bd2d8dcfbf164843cbecc9f682d.scope.
Dec  7 04:58:31 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:58:31 np0005549474 podman[225186]: 2025-12-07 09:58:31.163888115 +0000 UTC m=+0.252055838 container init 2b1b3be754c0d65f8eecfddfdc08f56b3fd44bd2d8dcfbf164843cbecc9f682d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:58:31 np0005549474 podman[225186]: 2025-12-07 09:58:31.17102937 +0000 UTC m=+0.259197043 container start 2b1b3be754c0d65f8eecfddfdc08f56b3fd44bd2d8dcfbf164843cbecc9f682d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Dec  7 04:58:31 np0005549474 podman[225186]: 2025-12-07 09:58:31.175033809 +0000 UTC m=+0.263201512 container attach 2b1b3be754c0d65f8eecfddfdc08f56b3fd44bd2d8dcfbf164843cbecc9f682d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 04:58:31 np0005549474 tender_wilson[225284]: 167 167
Dec  7 04:58:31 np0005549474 systemd[1]: libpod-2b1b3be754c0d65f8eecfddfdc08f56b3fd44bd2d8dcfbf164843cbecc9f682d.scope: Deactivated successfully.
Dec  7 04:58:31 np0005549474 podman[225186]: 2025-12-07 09:58:31.177359962 +0000 UTC m=+0.265527635 container died 2b1b3be754c0d65f8eecfddfdc08f56b3fd44bd2d8dcfbf164843cbecc9f682d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 04:58:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5766c9ba986188a4c975dea84e31030c50c44789b4157324b0ae5a312bd1e095-merged.mount: Deactivated successfully.
Dec  7 04:58:31 np0005549474 podman[225186]: 2025-12-07 09:58:31.21625028 +0000 UTC m=+0.304417953 container remove 2b1b3be754c0d65f8eecfddfdc08f56b3fd44bd2d8dcfbf164843cbecc9f682d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:58:31 np0005549474 systemd[1]: libpod-conmon-2b1b3be754c0d65f8eecfddfdc08f56b3fd44bd2d8dcfbf164843cbecc9f682d.scope: Deactivated successfully.
Dec  7 04:58:31 np0005549474 podman[225339]: 2025-12-07 09:58:31.387568341 +0000 UTC m=+0.054636358 container create d471659fd364aaf6921705dc3b59b1e0a96ac9367b1461c75b1d645108964ac9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:58:31 np0005549474 systemd[1]: Started libpod-conmon-d471659fd364aaf6921705dc3b59b1e0a96ac9367b1461c75b1d645108964ac9.scope.
Dec  7 04:58:31 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:58:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe536949fef97e584b5aba7bf8a5950ac3c8dbc93e55ad6f65c37a2f0adf9618/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe536949fef97e584b5aba7bf8a5950ac3c8dbc93e55ad6f65c37a2f0adf9618/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe536949fef97e584b5aba7bf8a5950ac3c8dbc93e55ad6f65c37a2f0adf9618/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe536949fef97e584b5aba7bf8a5950ac3c8dbc93e55ad6f65c37a2f0adf9618/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:58:31 np0005549474 podman[225339]: 2025-12-07 09:58:31.366256831 +0000 UTC m=+0.033324938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:58:31 np0005549474 podman[225339]: 2025-12-07 09:58:31.465171022 +0000 UTC m=+0.132239049 container init d471659fd364aaf6921705dc3b59b1e0a96ac9367b1461c75b1d645108964ac9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 04:58:31 np0005549474 podman[225339]: 2025-12-07 09:58:31.473150459 +0000 UTC m=+0.140218466 container start d471659fd364aaf6921705dc3b59b1e0a96ac9367b1461c75b1d645108964ac9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 04:58:31 np0005549474 podman[225339]: 2025-12-07 09:58:31.476527631 +0000 UTC m=+0.143595648 container attach d471659fd364aaf6921705dc3b59b1e0a96ac9367b1461c75b1d645108964ac9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:58:31 np0005549474 python3.9[225333]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  7 04:58:31 np0005549474 systemd[1]: Reloading.
Dec  7 04:58:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:31 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:31 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:58:31 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:58:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:31.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:31 np0005549474 systemd[1]: Reloading.
Dec  7 04:58:32 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:58:32 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:58:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:32 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:32 np0005549474 naughty_lichterman[225356]: {}
Dec  7 04:58:32 np0005549474 podman[225339]: 2025-12-07 09:58:32.199801788 +0000 UTC m=+0.866869815 container died d471659fd364aaf6921705dc3b59b1e0a96ac9367b1461c75b1d645108964ac9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 04:58:32 np0005549474 systemd[1]: libpod-d471659fd364aaf6921705dc3b59b1e0a96ac9367b1461c75b1d645108964ac9.scope: Deactivated successfully.
Dec  7 04:58:32 np0005549474 systemd[1]: libpod-d471659fd364aaf6921705dc3b59b1e0a96ac9367b1461c75b1d645108964ac9.scope: Consumed 1.096s CPU time.
Dec  7 04:58:32 np0005549474 systemd[1]: var-lib-containers-storage-overlay-fe536949fef97e584b5aba7bf8a5950ac3c8dbc93e55ad6f65c37a2f0adf9618-merged.mount: Deactivated successfully.
Dec  7 04:58:32 np0005549474 podman[225339]: 2025-12-07 09:58:32.245174303 +0000 UTC m=+0.912242320 container remove d471659fd364aaf6921705dc3b59b1e0a96ac9367b1461c75b1d645108964ac9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 04:58:32 np0005549474 systemd[1]: libpod-conmon-d471659fd364aaf6921705dc3b59b1e0a96ac9367b1461c75b1d645108964ac9.scope: Deactivated successfully.
Dec  7 04:58:32 np0005549474 lvm[225516]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:58:32 np0005549474 lvm[225516]: VG ceph_vg0 finished
Dec  7 04:58:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:58:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:58:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:58:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:58:32 np0005549474 lvm[225530]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:58:32 np0005549474 lvm[225530]: VG ceph_vg0 finished
Dec  7 04:58:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:32.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec  7 04:58:32 np0005549474 systemd[1]: session-53.scope: Deactivated successfully.
Dec  7 04:58:32 np0005549474 systemd[1]: session-53.scope: Consumed 3min 17.230s CPU time.
Dec  7 04:58:32 np0005549474 systemd-logind[796]: Session 53 logged out. Waiting for processes to exit.
Dec  7 04:58:32 np0005549474 systemd-logind[796]: Removed session 53.
Dec  7 04:58:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:32 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:58:33 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:58:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:33 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:33.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:34 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:34.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec  7 04:58:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:34 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:35 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:35.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:36 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:36 np0005549474 podman[225572]: 2025-12-07 09:58:36.266119525 +0000 UTC m=+0.076728418 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec  7 04:58:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:36.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:58:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:36 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:58:37.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:58:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:58:37.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:58:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:37 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:37.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:38 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:38.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:58:38.610 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 04:58:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:58:38.610 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 04:58:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:58:38.611 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 04:58:38 np0005549474 systemd-logind[796]: New session 54 of user zuul.
Dec  7 04:58:38 np0005549474 systemd[1]: Started Session 54 of User zuul.
Dec  7 04:58:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:58:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:38 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:39 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:39 np0005549474 python3.9[225755]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 04:58:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:39.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:39] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:58:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:39] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:58:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:40 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:40.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:58:40 np0005549474 podman[225909]: 2025-12-07 09:58:40.965313601 +0000 UTC m=+0.060636451 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  7 04:58:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:40 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:41 np0005549474 python3.9[225947]: ansible-ansible.builtin.service_facts Invoked
Dec  7 04:58:41 np0005549474 network[225972]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  7 04:58:41 np0005549474 network[225973]: 'network-scripts' will be removed from distribution in near future.
Dec  7 04:58:41 np0005549474 network[225974]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  7 04:58:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:41 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:41.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:42 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:42.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:58:42
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['volumes', 'images', '.nfs', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.control', '.rgw.root']
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:58:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:58:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:58:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:58:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:42 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:43 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c001460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:58:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:43.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:58:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:44 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:44.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:58:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:44 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5698003d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:45 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:45 np0005549474 python3.9[226256]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 04:58:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:45.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:46 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c001460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:46.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:58:46 np0005549474 python3.9[226341]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 04:58:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:46 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:58:47.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:58:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:47 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:47.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:48 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:48.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:58:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:48 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c001460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:49 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:49.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095849 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:58:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:49] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:58:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:49] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:58:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:50 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:50.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095850 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:58:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:58:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:50 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:51 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:51.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:52 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:52.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  7 04:58:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:52 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:53 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:53 np0005549474 python3.9[226502]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:58:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:53.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:54 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:54.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:54 np0005549474 python3.9[226654]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:58:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 04:58:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:54 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:55 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:55 np0005549474 python3.9[226809]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:58:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:55.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:56 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:58:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:58:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:56.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:58:56 np0005549474 python3.9[226961]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:58:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  7 04:58:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:56 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:58:57.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:58:57 np0005549474 python3.9[227116]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:58:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:58:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:58:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:57 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:58:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:57.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:58:58 np0005549474 python3.9[227239]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101536.8738875-245-124104360399704/.source.iscsi _original_basename=.pd3k0w2h follow=False checksum=3689deb023d3c5fd8fb4d5221a1340afa96f77ad backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:58 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:58:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:58.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:58:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:58 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:58:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  7 04:58:58 np0005549474 python3.9[227392]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:58 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:58:59 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:58:59 np0005549474 python3.9[227545]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:58:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:58:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:58:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:59.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:58:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:59] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:58:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:58:59] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:59:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:00 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:00.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 04:59:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:00 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:01 np0005549474 python3.9[227722]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:59:01 np0005549474 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  7 04:59:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:01 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:59:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:01 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:59:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:01 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:01.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:02 np0005549474 python3.9[227880]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:59:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:02 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:02 np0005549474 systemd[1]: Reloading.
Dec  7 04:59:02 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:59:02 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:59:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:02.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:02 np0005549474 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  7 04:59:02 np0005549474 systemd[1]: Starting Open-iSCSI...
Dec  7 04:59:02 np0005549474 kernel: Loading iSCSI transport class v2.0-870.
Dec  7 04:59:02 np0005549474 systemd[1]: Started Open-iSCSI.
Dec  7 04:59:02 np0005549474 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  7 04:59:02 np0005549474 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec  7 04:59:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 04:59:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:02 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:03 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:59:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:03 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:03 np0005549474 python3.9[228083]: ansible-ansible.builtin.service_facts Invoked
Dec  7 04:59:03 np0005549474 network[228100]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  7 04:59:03 np0005549474 network[228101]: 'network-scripts' will be removed from distribution in near future.
Dec  7 04:59:03 np0005549474 network[228102]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  7 04:59:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:03.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:04 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:04.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:59:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:04 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:05 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:59:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:05.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:59:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:06 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:06 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:59:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:06.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:59:06 np0005549474 podman[228154]: 2025-12-07 09:59:06.810946524 +0000 UTC m=+0.095170800 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  7 04:59:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:06 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:59:07.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 04:59:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:59:07.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:59:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:07 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:07.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:08 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:08.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:59:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:08 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c00037e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:09 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:59:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:09.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:59:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095909 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:59:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:09] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:59:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:09] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:59:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 04:59:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3912 writes, 17K keys, 3912 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s#012Cumulative WAL: 3912 writes, 3912 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1419 writes, 6125 keys, 1419 commit groups, 1.0 writes per commit group, ingest: 10.81 MB, 0.02 MB/s#012Interval WAL: 1419 writes, 1419 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     49.7      0.53              0.06         8    0.067       0      0       0.0       0.0#012  L6      1/0   11.42 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     78.8     67.2      1.30              0.23         7    0.186     32K   3633       0.0       0.0#012 Sum      1/0   11.42 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     55.9     62.1      1.83              0.29        15    0.122     32K   3633       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7     68.3     66.0      0.89              0.15         8    0.111     21K   2283       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     78.8     67.2      1.30              0.23         7    0.186     32K   3633       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     50.2      0.53              0.06         7    0.075       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      8.5      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.026, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.11 GB write, 0.09 MB/s write, 0.10 GB read, 0.09 MB/s read, 1.8 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5637d9ea7350#2 capacity: 304.00 MB usage: 5.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 9.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(345,5.69 MB,1.8733%) FilterBlock(16,101.67 KB,0.0326608%) IndexBlock(16,186.11 KB,0.0597853%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  7 04:59:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:10 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:10.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  7 04:59:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:10 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:11 np0005549474 podman[228280]: 2025-12-07 09:59:11.253814686 +0000 UTC m=+0.062262885 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 04:59:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:11 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c0003800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:11.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:12 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:12.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:59:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:59:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:59:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:59:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:59:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:59:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:59:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:59:12 np0005549474 python3.9[228428]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  7 04:59:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095912 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:59:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Dec  7 04:59:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:12 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:13 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:13 np0005549474 python3.9[228584]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  7 04:59:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:13.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:14 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:14.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:14 np0005549474 python3.9[228740]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:59:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Dec  7 04:59:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:14 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:15 np0005549474 python3.9[228864]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101553.8815346-476-229831777353481/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:15 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56c8001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:15.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:16 np0005549474 python3.9[229017]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:16 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:16.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:59:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:16 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:59:17.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:59:17 np0005549474 python3.9[229170]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 04:59:17 np0005549474 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  7 04:59:17 np0005549474 systemd[1]: Stopped Load Kernel Modules.
Dec  7 04:59:17 np0005549474 systemd[1]: Stopping Load Kernel Modules...
Dec  7 04:59:17 np0005549474 systemd[1]: Starting Load Kernel Modules...
Dec  7 04:59:17 np0005549474 systemd[1]: Finished Load Kernel Modules.
Dec  7 04:59:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:17 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:17.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:18 np0005549474 python3.9[229327]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:59:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:18 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:18.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:59:18 np0005549474 python3.9[229479]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:59:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:18 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56a4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:19 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:19 np0005549474 python3.9[229633]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:59:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.003000081s ======
Dec  7 04:59:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:19.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Dec  7 04:59:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:19] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 04:59:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:19] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 04:59:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:20 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56bc004650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:20.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:20 np0005549474 python3.9[229810]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:59:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:59:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[198817]: 07/12/2025 09:59:20 : epoch 69354f3f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f569c003030 fd 48 proxy ignored for local
Dec  7 04:59:21 np0005549474 kernel: ganesha.nfsd[225981]: segfault at 50 ip 00007f5774c2432e sp 00007f572dffa210 error 4 in libntirpc.so.5.8[7f5774c09000+2c000] likely on CPU 1 (core 0, socket 1)
Dec  7 04:59:21 np0005549474 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  7 04:59:21 np0005549474 systemd[1]: Started Process Core Dump (PID 229935/UID 0).
Dec  7 04:59:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:21 np0005549474 python3.9[229934]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101560.0692613-650-232576512855115/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:21.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:21 np0005549474 python3.9[230089]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 04:59:22 np0005549474 systemd-coredump[229936]: Process 198853 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 65:#012#0  0x00007f5774c2432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  7 04:59:22 np0005549474 systemd[1]: systemd-coredump@6-229935-0.service: Deactivated successfully.
Dec  7 04:59:22 np0005549474 systemd[1]: systemd-coredump@6-229935-0.service: Consumed 1.191s CPU time.
Dec  7 04:59:22 np0005549474 podman[230119]: 2025-12-07 09:59:22.340366703 +0000 UTC m=+0.027610372 container died 4fe5129e4819b196cd3a0745e7b23a351c15dcf95585e57929f602b1fadb6b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:59:22 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ad500ab7c37b37093bb23c6dbe26910d6e8403f74bd6e3b1691a23d9f8f84ca7-merged.mount: Deactivated successfully.
Dec  7 04:59:22 np0005549474 podman[230119]: 2025-12-07 09:59:22.384290259 +0000 UTC m=+0.071533878 container remove 4fe5129e4819b196cd3a0745e7b23a351c15dcf95585e57929f602b1fadb6b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 04:59:22 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Main process exited, code=exited, status=139/n/a
Dec  7 04:59:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:22.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:22 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Failed with result 'exit-code'.
Dec  7 04:59:22 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.651s CPU time.
Dec  7 04:59:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:59:22 np0005549474 python3.9[230286]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:59:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:23.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:59:24 np0005549474 python3.9[230440]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:24.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:24 np0005549474 python3.9[230592]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 04:59:25 np0005549474 python3.9[230746]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:25.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:26 np0005549474 python3.9[230898]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:26.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:59:26 np0005549474 python3.9[231051]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095927 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:59:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:59:27.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:59:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:59:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:59:27 np0005549474 python3.9[231204]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:27.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:28.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 04:59:29 np0005549474 python3.9[231359]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:59:29 np0005549474 python3.9[231516]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:29.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:29] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:59:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:29] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:59:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:30.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095930 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 04:59:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 04:59:30 np0005549474 python3.9[231668]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:59:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:31 np0005549474 python3.9[231822]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:59:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:31.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:32 np0005549474 python3.9[231900]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:59:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:32.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:32 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Scheduled restart job, restart counter is at 7.
Dec  7 04:59:32 np0005549474 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:59:32 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.651s CPU time.
Dec  7 04:59:32 np0005549474 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 04:59:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  7 04:59:32 np0005549474 podman[232152]: 2025-12-07 09:59:32.889049408 +0000 UTC m=+0.038292903 container create 42f24b76c048cec8a97922aac30af331b19822c877a7113307baef1d461d712c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 04:59:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5e2ddb05301261c82d9e0e55c8cd87175e6126422de9b58207fe1e9b739f05/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5e2ddb05301261c82d9e0e55c8cd87175e6126422de9b58207fe1e9b739f05/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5e2ddb05301261c82d9e0e55c8cd87175e6126422de9b58207fe1e9b739f05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5e2ddb05301261c82d9e0e55c8cd87175e6126422de9b58207fe1e9b739f05/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bjrqrk-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:32 np0005549474 podman[232152]: 2025-12-07 09:59:32.961274393 +0000 UTC m=+0.110517908 container init 42f24b76c048cec8a97922aac30af331b19822c877a7113307baef1d461d712c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 04:59:32 np0005549474 podman[232152]: 2025-12-07 09:59:32.966377041 +0000 UTC m=+0.115620536 container start 42f24b76c048cec8a97922aac30af331b19822c877a7113307baef1d461d712c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:59:32 np0005549474 podman[232152]: 2025-12-07 09:59:32.870836122 +0000 UTC m=+0.020079637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:59:32 np0005549474 bash[232152]: 42f24b76c048cec8a97922aac30af331b19822c877a7113307baef1d461d712c
Dec  7 04:59:32 np0005549474 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 04:59:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:32 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 04:59:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:32 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 04:59:33 np0005549474 python3.9[232146]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:59:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:33 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 04:59:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:33 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 04:59:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:33 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 04:59:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:33 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 04:59:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:33 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 04:59:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:33 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 04:59:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 04:59:33 np0005549474 python3.9[232316]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:59:33 np0005549474 podman[232433]: 2025-12-07 09:59:33.890832072 +0000 UTC m=+0.045945071 container create 01179e7815a47812f783b43b64680e4173f979e815dce0a73095c4c3fe8a2854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_khorana, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:59:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:33.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:33 np0005549474 systemd[1]: Started libpod-conmon-01179e7815a47812f783b43b64680e4173f979e815dce0a73095c4c3fe8a2854.scope.
Dec  7 04:59:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:59:33 np0005549474 podman[232433]: 2025-12-07 09:59:33.871750634 +0000 UTC m=+0.026863643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:59:33 np0005549474 podman[232433]: 2025-12-07 09:59:33.966583434 +0000 UTC m=+0.121696443 container init 01179e7815a47812f783b43b64680e4173f979e815dce0a73095c4c3fe8a2854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:59:33 np0005549474 podman[232433]: 2025-12-07 09:59:33.976255516 +0000 UTC m=+0.131368515 container start 01179e7815a47812f783b43b64680e4173f979e815dce0a73095c4c3fe8a2854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_khorana, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:59:33 np0005549474 podman[232433]: 2025-12-07 09:59:33.979120614 +0000 UTC m=+0.134233613 container attach 01179e7815a47812f783b43b64680e4173f979e815dce0a73095c4c3fe8a2854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 04:59:33 np0005549474 systemd[1]: libpod-01179e7815a47812f783b43b64680e4173f979e815dce0a73095c4c3fe8a2854.scope: Deactivated successfully.
Dec  7 04:59:33 np0005549474 awesome_khorana[232472]: 167 167
Dec  7 04:59:33 np0005549474 podman[232433]: 2025-12-07 09:59:33.984304845 +0000 UTC m=+0.139417844 container died 01179e7815a47812f783b43b64680e4173f979e815dce0a73095c4c3fe8a2854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:59:33 np0005549474 conmon[232472]: conmon 01179e7815a47812f783 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01179e7815a47812f783b43b64680e4173f979e815dce0a73095c4c3fe8a2854.scope/container/memory.events
Dec  7 04:59:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3f48cf06e72d82c409fdb06950b681cb9267a0d5fa82de19aaefafbef10a3d15-merged.mount: Deactivated successfully.
Dec  7 04:59:34 np0005549474 podman[232433]: 2025-12-07 09:59:34.014262731 +0000 UTC m=+0.169375730 container remove 01179e7815a47812f783b43b64680e4173f979e815dce0a73095c4c3fe8a2854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:59:34 np0005549474 systemd[1]: libpod-conmon-01179e7815a47812f783b43b64680e4173f979e815dce0a73095c4c3fe8a2854.scope: Deactivated successfully.
Dec  7 04:59:34 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 04:59:34 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:59:34 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:59:34 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 04:59:34 np0005549474 podman[232548]: 2025-12-07 09:59:34.155294667 +0000 UTC m=+0.036802112 container create a2ee730800de8d003c7ff3f6924ae6ae7f824e9e2ec7ef8b93912b6697852286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_williamson, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 04:59:34 np0005549474 systemd[1]: Started libpod-conmon-a2ee730800de8d003c7ff3f6924ae6ae7f824e9e2ec7ef8b93912b6697852286.scope.
Dec  7 04:59:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:59:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d0578e8fda51def73c0623d6ebdb168f212b08a334831f17b3e9c6390ccdde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d0578e8fda51def73c0623d6ebdb168f212b08a334831f17b3e9c6390ccdde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d0578e8fda51def73c0623d6ebdb168f212b08a334831f17b3e9c6390ccdde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d0578e8fda51def73c0623d6ebdb168f212b08a334831f17b3e9c6390ccdde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d0578e8fda51def73c0623d6ebdb168f212b08a334831f17b3e9c6390ccdde/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:34 np0005549474 podman[232548]: 2025-12-07 09:59:34.140017871 +0000 UTC m=+0.021525336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:59:34 np0005549474 podman[232548]: 2025-12-07 09:59:34.236300871 +0000 UTC m=+0.117808356 container init a2ee730800de8d003c7ff3f6924ae6ae7f824e9e2ec7ef8b93912b6697852286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 04:59:34 np0005549474 podman[232548]: 2025-12-07 09:59:34.242794028 +0000 UTC m=+0.124301473 container start a2ee730800de8d003c7ff3f6924ae6ae7f824e9e2ec7ef8b93912b6697852286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  7 04:59:34 np0005549474 podman[232548]: 2025-12-07 09:59:34.245614424 +0000 UTC m=+0.127121889 container attach a2ee730800de8d003c7ff3f6924ae6ae7f824e9e2ec7ef8b93912b6697852286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_williamson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 04:59:34 np0005549474 python3.9[232619]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:34.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:34 np0005549474 loving_williamson[232608]: --> passed data devices: 0 physical, 1 LVM
Dec  7 04:59:34 np0005549474 loving_williamson[232608]: --> All data devices are unavailable
Dec  7 04:59:34 np0005549474 systemd[1]: libpod-a2ee730800de8d003c7ff3f6924ae6ae7f824e9e2ec7ef8b93912b6697852286.scope: Deactivated successfully.
Dec  7 04:59:34 np0005549474 podman[232548]: 2025-12-07 09:59:34.616990818 +0000 UTC m=+0.498498263 container died a2ee730800de8d003c7ff3f6924ae6ae7f824e9e2ec7ef8b93912b6697852286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_williamson, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:59:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-70d0578e8fda51def73c0623d6ebdb168f212b08a334831f17b3e9c6390ccdde-merged.mount: Deactivated successfully.
Dec  7 04:59:34 np0005549474 podman[232548]: 2025-12-07 09:59:34.656500313 +0000 UTC m=+0.538007748 container remove a2ee730800de8d003c7ff3f6924ae6ae7f824e9e2ec7ef8b93912b6697852286 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_williamson, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:59:34 np0005549474 systemd[1]: libpod-conmon-a2ee730800de8d003c7ff3f6924ae6ae7f824e9e2ec7ef8b93912b6697852286.scope: Deactivated successfully.
Dec  7 04:59:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:59:35 np0005549474 podman[232889]: 2025-12-07 09:59:35.182339988 +0000 UTC m=+0.037062749 container create 4e6c34f3985f7425b1a80d164afbe1ee78f18ed72545ea077ee4cdfaf0776930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:59:35 np0005549474 python3.9[232866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:59:35 np0005549474 systemd[1]: Started libpod-conmon-4e6c34f3985f7425b1a80d164afbe1ee78f18ed72545ea077ee4cdfaf0776930.scope.
Dec  7 04:59:35 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:59:35 np0005549474 podman[232889]: 2025-12-07 09:59:35.253127655 +0000 UTC m=+0.107850416 container init 4e6c34f3985f7425b1a80d164afbe1ee78f18ed72545ea077ee4cdfaf0776930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 04:59:35 np0005549474 podman[232889]: 2025-12-07 09:59:35.261842832 +0000 UTC m=+0.116565593 container start 4e6c34f3985f7425b1a80d164afbe1ee78f18ed72545ea077ee4cdfaf0776930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:59:35 np0005549474 podman[232889]: 2025-12-07 09:59:35.167595707 +0000 UTC m=+0.022318498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:59:35 np0005549474 podman[232889]: 2025-12-07 09:59:35.265166342 +0000 UTC m=+0.119889123 container attach 4e6c34f3985f7425b1a80d164afbe1ee78f18ed72545ea077ee4cdfaf0776930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 04:59:35 np0005549474 intelligent_bohr[232907]: 167 167
Dec  7 04:59:35 np0005549474 systemd[1]: libpod-4e6c34f3985f7425b1a80d164afbe1ee78f18ed72545ea077ee4cdfaf0776930.scope: Deactivated successfully.
Dec  7 04:59:35 np0005549474 podman[232889]: 2025-12-07 09:59:35.268385219 +0000 UTC m=+0.123107980 container died 4e6c34f3985f7425b1a80d164afbe1ee78f18ed72545ea077ee4cdfaf0776930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 04:59:35 np0005549474 systemd[1]: var-lib-containers-storage-overlay-53566360b775a128e778cd985b6a544348e4253c89a9fd48d0cc79e98b90bdca-merged.mount: Deactivated successfully.
Dec  7 04:59:35 np0005549474 podman[232889]: 2025-12-07 09:59:35.299955098 +0000 UTC m=+0.154677849 container remove 4e6c34f3985f7425b1a80d164afbe1ee78f18ed72545ea077ee4cdfaf0776930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 04:59:35 np0005549474 systemd[1]: libpod-conmon-4e6c34f3985f7425b1a80d164afbe1ee78f18ed72545ea077ee4cdfaf0776930.scope: Deactivated successfully.
Dec  7 04:59:35 np0005549474 podman[232956]: 2025-12-07 09:59:35.455731956 +0000 UTC m=+0.040996326 container create 3b5e00f31db6a7d7d93e5d608ac7358afdc3211f8758f7eb5ce5ab33e48c8f16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swanson, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:59:35 np0005549474 systemd[1]: Started libpod-conmon-3b5e00f31db6a7d7d93e5d608ac7358afdc3211f8758f7eb5ce5ab33e48c8f16.scope.
Dec  7 04:59:35 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:59:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a19221a3db054833ac3735f4cd8e6530edef278dcc54f1c62227a59bb36386/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a19221a3db054833ac3735f4cd8e6530edef278dcc54f1c62227a59bb36386/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a19221a3db054833ac3735f4cd8e6530edef278dcc54f1c62227a59bb36386/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a19221a3db054833ac3735f4cd8e6530edef278dcc54f1c62227a59bb36386/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:35 np0005549474 podman[232956]: 2025-12-07 09:59:35.438081707 +0000 UTC m=+0.023346097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:59:35 np0005549474 podman[232956]: 2025-12-07 09:59:35.537283245 +0000 UTC m=+0.122547705 container init 3b5e00f31db6a7d7d93e5d608ac7358afdc3211f8758f7eb5ce5ab33e48c8f16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swanson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 04:59:35 np0005549474 podman[232956]: 2025-12-07 09:59:35.545249642 +0000 UTC m=+0.130514052 container start 3b5e00f31db6a7d7d93e5d608ac7358afdc3211f8758f7eb5ce5ab33e48c8f16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swanson, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 04:59:35 np0005549474 podman[232956]: 2025-12-07 09:59:35.549139907 +0000 UTC m=+0.134404377 container attach 3b5e00f31db6a7d7d93e5d608ac7358afdc3211f8758f7eb5ce5ab33e48c8f16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swanson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  7 04:59:35 np0005549474 python3.9[233029]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:35 np0005549474 sad_swanson[233014]: {
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:    "0": [
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:        {
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            "devices": [
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "/dev/loop3"
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            ],
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            "lv_name": "ceph_lv0",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            "lv_size": "21470642176",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            "name": "ceph_lv0",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            "tags": {
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.cephx_lockbox_secret": "",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.cluster_name": "ceph",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.crush_device_class": "",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.encrypted": "0",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.osd_id": "0",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.type": "block",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.vdo": "0",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:                "ceph.with_tpm": "0"
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            },
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            "type": "block",
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:            "vg_name": "ceph_vg0"
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:        }
Dec  7 04:59:35 np0005549474 sad_swanson[233014]:    ]
Dec  7 04:59:35 np0005549474 sad_swanson[233014]: }
Dec  7 04:59:35 np0005549474 systemd[1]: libpod-3b5e00f31db6a7d7d93e5d608ac7358afdc3211f8758f7eb5ce5ab33e48c8f16.scope: Deactivated successfully.
Dec  7 04:59:35 np0005549474 conmon[233014]: conmon 3b5e00f31db6a7d7d93e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b5e00f31db6a7d7d93e5d608ac7358afdc3211f8758f7eb5ce5ab33e48c8f16.scope/container/memory.events
Dec  7 04:59:35 np0005549474 podman[232956]: 2025-12-07 09:59:35.863357017 +0000 UTC m=+0.448621397 container died 3b5e00f31db6a7d7d93e5d608ac7358afdc3211f8758f7eb5ce5ab33e48c8f16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 04:59:35 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e8a19221a3db054833ac3735f4cd8e6530edef278dcc54f1c62227a59bb36386-merged.mount: Deactivated successfully.
Dec  7 04:59:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:35.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:35 np0005549474 podman[232956]: 2025-12-07 09:59:35.915933756 +0000 UTC m=+0.501198166 container remove 3b5e00f31db6a7d7d93e5d608ac7358afdc3211f8758f7eb5ce5ab33e48c8f16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swanson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 04:59:35 np0005549474 systemd[1]: libpod-conmon-3b5e00f31db6a7d7d93e5d608ac7358afdc3211f8758f7eb5ce5ab33e48c8f16.scope: Deactivated successfully.
Dec  7 04:59:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:36.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:36 np0005549474 podman[233258]: 2025-12-07 09:59:36.560640616 +0000 UTC m=+0.045246042 container create b5ba016e0650842d90e1703f3b6fabb43bd77b7e6cd16757ab199a37619aea16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mclean, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 04:59:36 np0005549474 systemd[1]: Started libpod-conmon-b5ba016e0650842d90e1703f3b6fabb43bd77b7e6cd16757ab199a37619aea16.scope.
Dec  7 04:59:36 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:59:36 np0005549474 podman[233258]: 2025-12-07 09:59:36.541870536 +0000 UTC m=+0.026476012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:59:36 np0005549474 podman[233258]: 2025-12-07 09:59:36.656002271 +0000 UTC m=+0.140607747 container init b5ba016e0650842d90e1703f3b6fabb43bd77b7e6cd16757ab199a37619aea16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mclean, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 04:59:36 np0005549474 podman[233258]: 2025-12-07 09:59:36.663150915 +0000 UTC m=+0.147756361 container start b5ba016e0650842d90e1703f3b6fabb43bd77b7e6cd16757ab199a37619aea16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:59:36 np0005549474 podman[233258]: 2025-12-07 09:59:36.666701052 +0000 UTC m=+0.151306518 container attach b5ba016e0650842d90e1703f3b6fabb43bd77b7e6cd16757ab199a37619aea16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:59:36 np0005549474 nostalgic_mclean[233304]: 167 167
Dec  7 04:59:36 np0005549474 systemd[1]: libpod-b5ba016e0650842d90e1703f3b6fabb43bd77b7e6cd16757ab199a37619aea16.scope: Deactivated successfully.
Dec  7 04:59:36 np0005549474 podman[233258]: 2025-12-07 09:59:36.668789969 +0000 UTC m=+0.153395405 container died b5ba016e0650842d90e1703f3b6fabb43bd77b7e6cd16757ab199a37619aea16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Dec  7 04:59:36 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e4c24f5f5411f9b9a0148bb3f618118df1dd4f98d17627903c575c47168038d4-merged.mount: Deactivated successfully.
Dec  7 04:59:36 np0005549474 podman[233258]: 2025-12-07 09:59:36.703493802 +0000 UTC m=+0.188099228 container remove b5ba016e0650842d90e1703f3b6fabb43bd77b7e6cd16757ab199a37619aea16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mclean, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 04:59:36 np0005549474 systemd[1]: libpod-conmon-b5ba016e0650842d90e1703f3b6fabb43bd77b7e6cd16757ab199a37619aea16.scope: Deactivated successfully.
Dec  7 04:59:36 np0005549474 python3.9[233301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:59:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:59:36 np0005549474 podman[233330]: 2025-12-07 09:59:36.864861783 +0000 UTC m=+0.035867467 container create 0a96c172da19f613c28635821ba6ce80f05998735bea4e4fddfb09ac8729900e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 04:59:36 np0005549474 systemd[1]: Started libpod-conmon-0a96c172da19f613c28635821ba6ce80f05998735bea4e4fddfb09ac8729900e.scope.
Dec  7 04:59:36 np0005549474 systemd[1]: Started libcrun container.
Dec  7 04:59:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d850b7142190734d7d4756337a53aa46b8d16709a666e1a4d82157c041e9078/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d850b7142190734d7d4756337a53aa46b8d16709a666e1a4d82157c041e9078/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d850b7142190734d7d4756337a53aa46b8d16709a666e1a4d82157c041e9078/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:36 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d850b7142190734d7d4756337a53aa46b8d16709a666e1a4d82157c041e9078/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 04:59:36 np0005549474 podman[233330]: 2025-12-07 09:59:36.917730402 +0000 UTC m=+0.088736096 container init 0a96c172da19f613c28635821ba6ce80f05998735bea4e4fddfb09ac8729900e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:59:36 np0005549474 podman[233330]: 2025-12-07 09:59:36.925268127 +0000 UTC m=+0.096273811 container start 0a96c172da19f613c28635821ba6ce80f05998735bea4e4fddfb09ac8729900e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  7 04:59:36 np0005549474 podman[233330]: 2025-12-07 09:59:36.928226537 +0000 UTC m=+0.099232231 container attach 0a96c172da19f613c28635821ba6ce80f05998735bea4e4fddfb09ac8729900e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 04:59:36 np0005549474 podman[233330]: 2025-12-07 09:59:36.849957328 +0000 UTC m=+0.020963032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 04:59:36 np0005549474 podman[233344]: 2025-12-07 09:59:36.998643613 +0000 UTC m=+0.100001682 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  7 04:59:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:59:37.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:59:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:59:37.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:59:37 np0005549474 python3.9[233465]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:37 np0005549474 lvm[233546]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 04:59:37 np0005549474 lvm[233546]: VG ceph_vg0 finished
Dec  7 04:59:37 np0005549474 practical_pascal[233347]: {}
Dec  7 04:59:37 np0005549474 systemd[1]: libpod-0a96c172da19f613c28635821ba6ce80f05998735bea4e4fddfb09ac8729900e.scope: Deactivated successfully.
Dec  7 04:59:37 np0005549474 systemd[1]: libpod-0a96c172da19f613c28635821ba6ce80f05998735bea4e4fddfb09ac8729900e.scope: Consumed 1.136s CPU time.
Dec  7 04:59:37 np0005549474 podman[233330]: 2025-12-07 09:59:37.686046384 +0000 UTC m=+0.857052068 container died 0a96c172da19f613c28635821ba6ce80f05998735bea4e4fddfb09ac8729900e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 04:59:37 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6d850b7142190734d7d4756337a53aa46b8d16709a666e1a4d82157c041e9078-merged.mount: Deactivated successfully.
Dec  7 04:59:37 np0005549474 podman[233330]: 2025-12-07 09:59:37.728222931 +0000 UTC m=+0.899228615 container remove 0a96c172da19f613c28635821ba6ce80f05998735bea4e4fddfb09ac8729900e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 04:59:37 np0005549474 systemd[1]: libpod-conmon-0a96c172da19f613c28635821ba6ce80f05998735bea4e4fddfb09ac8729900e.scope: Deactivated successfully.
Dec  7 04:59:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 04:59:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:59:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 04:59:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:59:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:37.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:38 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:59:38 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 04:59:38 np0005549474 python3.9[233714]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:59:38 np0005549474 systemd[1]: Reloading.
Dec  7 04:59:38 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:59:38 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:59:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:38.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:59:38.611 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 04:59:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:59:38.612 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 04:59:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 09:59:38.612 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 04:59:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  7 04:59:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:39 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:59:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:39 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:59:39 np0005549474 python3.9[233904]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:59:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:39.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:39 np0005549474 python3.9[233982]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:39] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:59:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:39] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 04:59:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 04:59:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:40.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 04:59:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Dec  7 04:59:40 np0005549474 python3.9[234159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:59:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:41 np0005549474 podman[234211]: 2025-12-07 09:59:41.489405327 +0000 UTC m=+0.066070709 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  7 04:59:41 np0005549474 python3.9[234258]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:41.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_09:59:42
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'images', 'volumes', 'vms', '.nfs', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta']
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 04:59:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:59:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:59:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:42.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:59:42 np0005549474 python3.9[234410]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 04:59:42 np0005549474 systemd[1]: Reloading.
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 04:59:42 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 04:59:42 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 04:59:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Dec  7 04:59:43 np0005549474 systemd[1]: Starting Create netns directory...
Dec  7 04:59:43 np0005549474 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  7 04:59:43 np0005549474 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  7 04:59:43 np0005549474 systemd[1]: Finished Create netns directory.
Dec  7 04:59:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:43.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:44 np0005549474 python3.9[234605]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:59:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:44.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:59:44 np0005549474 python3.9[234758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 04:59:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:45 np0005549474 python3.9[234894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101584.525121-1271-257440184396697/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:59:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:45.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:46 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:46.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 04:59:46 np0005549474 python3.9[235050]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 04:59:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:47 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:59:47.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:59:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:47 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:47 np0005549474 python3.9[235204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 04:59:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:47.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:48 np0005549474 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  7 04:59:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:48 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:48 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 04:59:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:48 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 04:59:48 np0005549474 python3.9[235328]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101587.2179399-1346-163630665718043/.source.json _original_basename=.oy60kpks follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:48.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 04:59:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095949 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:59:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:49 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:49 np0005549474 python3.9[235481]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 04:59:49 np0005549474 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  7 04:59:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:49 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:49.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:49] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 04:59:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:49] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 04:59:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:50 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec  7 04:59:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:50.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Dec  7 04:59:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:51 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:51 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 04:59:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:51 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:51.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:52 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:52.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:59:52 np0005549474 python3.9[235912]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  7 04:59:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:53 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:53 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:53 np0005549474 python3.9[236066]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  7 04:59:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:53.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:54 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:54.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/095954 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 04:59:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 04:59:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:55 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:55 np0005549474 python3.9[236219]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  7 04:59:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:55 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4002050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:55.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 04:59:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:56 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:56.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Dec  7 04:59:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:57 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:59:57.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 04:59:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T09:59:57.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 04:59:57 np0005549474 python3[236399]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  7 04:59:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 04:59:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 04:59:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:57 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:57.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:58 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 04:59:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:58.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 04:59:58 np0005549474 podman[236413]: 2025-12-07 09:59:58.510298123 +0000 UTC m=+1.064146420 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec  7 04:59:58 np0005549474 podman[236472]: 2025-12-07 09:59:58.656934624 +0000 UTC m=+0.044668961 container create 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  7 04:59:58 np0005549474 podman[236472]: 2025-12-07 09:59:58.633018077 +0000 UTC m=+0.020752434 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec  7 04:59:58 np0005549474 python3[236399]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec  7 04:59:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Dec  7 04:59:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:59 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:59 np0005549474 python3.9[236665]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 04:59:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 09:59:59 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 04:59:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 04:59:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 04:59:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:59.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 04:59:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:59] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 04:59:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:09:59:59] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 05:00:00 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec  7 05:00:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:00 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4002050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:00 np0005549474 ceph-mon[74516]: overall HEALTH_OK
Dec  7 05:00:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:00:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:00.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:00:00 np0005549474 python3.9[236844]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Dec  7 05:00:00 np0005549474 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  7 05:00:01 np0005549474 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  7 05:00:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:01 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:01 np0005549474 python3.9[236921]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 05:00:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:01 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:01.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:01 np0005549474 python3.9[237075]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765101601.3551502-1610-245141882322267/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:02 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:02.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:02 np0005549474 python3.9[237151]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  7 05:00:02 np0005549474 systemd[1]: Reloading.
Dec  7 05:00:02 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 05:00:02 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 05:00:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  7 05:00:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:03 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:03 np0005549474 python3.9[237265]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 05:00:03 np0005549474 systemd[1]: Reloading.
Dec  7 05:00:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:03 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:03 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 05:00:03 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 05:00:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:03.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:03 np0005549474 systemd[1]: Starting multipathd container...
Dec  7 05:00:04 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:00:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df22171396f28738df5d4e90238ae9985aee9622fa4c78356785f039a1e697ef/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df22171396f28738df5d4e90238ae9985aee9622fa4c78356785f039a1e697ef/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:04 np0005549474 systemd[1]: Started /usr/bin/podman healthcheck run 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882.
Dec  7 05:00:04 np0005549474 podman[237305]: 2025-12-07 10:00:04.106013941 +0000 UTC m=+0.109319392 container init 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  7 05:00:04 np0005549474 multipathd[237320]: + sudo -E kolla_set_configs
Dec  7 05:00:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:04 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:04 np0005549474 podman[237305]: 2025-12-07 10:00:04.144274787 +0000 UTC m=+0.147580238 container start 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  7 05:00:04 np0005549474 podman[237305]: multipathd
Dec  7 05:00:04 np0005549474 systemd[1]: Started multipathd container.
Dec  7 05:00:04 np0005549474 multipathd[237320]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  7 05:00:04 np0005549474 multipathd[237320]: INFO:__main__:Validating config file
Dec  7 05:00:04 np0005549474 multipathd[237320]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  7 05:00:04 np0005549474 multipathd[237320]: INFO:__main__:Writing out command to execute
Dec  7 05:00:04 np0005549474 multipathd[237320]: ++ cat /run_command
Dec  7 05:00:04 np0005549474 multipathd[237320]: + CMD='/usr/sbin/multipathd -d'
Dec  7 05:00:04 np0005549474 multipathd[237320]: + ARGS=
Dec  7 05:00:04 np0005549474 multipathd[237320]: + sudo kolla_copy_cacerts
Dec  7 05:00:04 np0005549474 podman[237327]: 2025-12-07 10:00:04.211425656 +0000 UTC m=+0.059540184 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  7 05:00:04 np0005549474 multipathd[237320]: + [[ ! -n '' ]]
Dec  7 05:00:04 np0005549474 multipathd[237320]: + . kolla_extend_start
Dec  7 05:00:04 np0005549474 multipathd[237320]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  7 05:00:04 np0005549474 multipathd[237320]: + umask 0022
Dec  7 05:00:04 np0005549474 multipathd[237320]: + exec /usr/sbin/multipathd -d
Dec  7 05:00:04 np0005549474 multipathd[237320]: Running command: '/usr/sbin/multipathd -d'
Dec  7 05:00:04 np0005549474 systemd[1]: 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882-18a7a5ad5634a23a.service: Main process exited, code=exited, status=1/FAILURE
Dec  7 05:00:04 np0005549474 systemd[1]: 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882-18a7a5ad5634a23a.service: Failed with result 'exit-code'.
Dec  7 05:00:04 np0005549474 multipathd[237320]: 3566.941694 | --------start up--------
Dec  7 05:00:04 np0005549474 multipathd[237320]: 3566.941716 | read /etc/multipath.conf
Dec  7 05:00:04 np0005549474 multipathd[237320]: 3566.947702 | path checkers start up
Dec  7 05:00:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:04.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  7 05:00:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:05 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:05 np0005549474 python3.9[237511]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 05:00:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:05 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:05.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:06 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:06 np0005549474 python3.9[237666]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 05:00:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:06.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:00:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:07 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:00:07.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:00:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:00:07.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
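
The two alertmanager entries above show the ceph-dashboard webhook receivers on compute-1 and compute-2 timing out. A minimal connectivity probe for those endpoints, assuming curl is available on the host (the probe is illustrative, not part of the deployment; the URLs are taken verbatim from the log):

    # Probe the receivers alertmanager could not reach (sketch; curl assumed present):
    for host in compute-1 compute-2; do
      curl -sS -m 5 -o /dev/null -w "${host}: %{http_code}\n" \
        "http://${host}.ctlplane.example.com:8443/api/prometheus_receiver" \
        || echo "${host}: unreachable"
    done
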
Dec  7 05:00:07 np0005549474 python3.9[237833]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 05:00:07 np0005549474 systemd[1]: Stopping multipathd container...
Dec  7 05:00:07 np0005549474 multipathd[237320]: 3569.939750 | exit (signal)
Dec  7 05:00:07 np0005549474 multipathd[237320]: 3569.939809 | --------shut down-------
Dec  7 05:00:07 np0005549474 systemd[1]: libpod-76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882.scope: Deactivated successfully.
Dec  7 05:00:07 np0005549474 podman[237843]: 2025-12-07 10:00:07.257779944 +0000 UTC m=+0.071626730 container died 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, config_id=multipathd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 05:00:07 np0005549474 podman[237835]: 2025-12-07 10:00:07.266951363 +0000 UTC m=+0.096507215 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  7 05:00:07 np0005549474 systemd[1]: 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882-18a7a5ad5634a23a.timer: Deactivated successfully.
Dec  7 05:00:07 np0005549474 systemd[1]: Stopped /usr/bin/podman healthcheck run 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882.
Dec  7 05:00:07 np0005549474 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882-userdata-shm.mount: Deactivated successfully.
Dec  7 05:00:07 np0005549474 systemd[1]: var-lib-containers-storage-overlay-df22171396f28738df5d4e90238ae9985aee9622fa4c78356785f039a1e697ef-merged.mount: Deactivated successfully.
Dec  7 05:00:07 np0005549474 podman[237843]: 2025-12-07 10:00:07.649825721 +0000 UTC m=+0.463672487 container cleanup 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  7 05:00:07 np0005549474 podman[237843]: multipathd
Dec  7 05:00:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:07 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:07 np0005549474 podman[237894]: multipathd
Dec  7 05:00:07 np0005549474 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  7 05:00:07 np0005549474 systemd[1]: Stopped multipathd container.
Dec  7 05:00:07 np0005549474 systemd[1]: Starting multipathd container...
Dec  7 05:00:07 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:00:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df22171396f28738df5d4e90238ae9985aee9622fa4c78356785f039a1e697ef/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:07 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df22171396f28738df5d4e90238ae9985aee9622fa4c78356785f039a1e697ef/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
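
The two kernel notices above mean the backing xfs filesystem was created without the "bigtime" feature, so its inode timestamps cap at 2038. A hedged check-and-upgrade sketch (assumes xfsprogs 5.10 or newer; the upgrade must run against an unmounted filesystem, and the device name below is a placeholder):

    # Check whether bigtime is enabled on the mounted filesystem:
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'
    # Offline upgrade (illustrative; /dev/vdb1 is a hypothetical device):
    xfs_admin -O bigtime=1 /dev/vdb1
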
Dec  7 05:00:07 np0005549474 systemd[1]: Started /usr/bin/podman healthcheck run 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882.
Dec  7 05:00:07 np0005549474 podman[237907]: 2025-12-07 10:00:07.823385001 +0000 UTC m=+0.095765994 container init 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  7 05:00:07 np0005549474 multipathd[237922]: + sudo -E kolla_set_configs
Dec  7 05:00:07 np0005549474 podman[237907]: 2025-12-07 10:00:07.878874994 +0000 UTC m=+0.151255977 container start 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  7 05:00:07 np0005549474 podman[237907]: multipathd
Dec  7 05:00:07 np0005549474 systemd[1]: Started multipathd container.
Dec  7 05:00:07 np0005549474 podman[237929]: 2025-12-07 10:00:07.934647595 +0000 UTC m=+0.047521508 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  7 05:00:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:07 np0005549474 systemd[1]: 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882-68886238195de3ee.service: Main process exited, code=exited, status=1/FAILURE
Dec  7 05:00:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:07.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:07 np0005549474 systemd[1]: 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882-68886238195de3ee.service: Failed with result 'exit-code'.
Dec  7 05:00:07 np0005549474 multipathd[237922]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  7 05:00:07 np0005549474 multipathd[237922]: INFO:__main__:Validating config file
Dec  7 05:00:07 np0005549474 multipathd[237922]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  7 05:00:07 np0005549474 multipathd[237922]: INFO:__main__:Writing out command to execute
Dec  7 05:00:07 np0005549474 multipathd[237922]: ++ cat /run_command
Dec  7 05:00:07 np0005549474 multipathd[237922]: + CMD='/usr/sbin/multipathd -d'
Dec  7 05:00:07 np0005549474 multipathd[237922]: + ARGS=
Dec  7 05:00:07 np0005549474 multipathd[237922]: + sudo kolla_copy_cacerts
Dec  7 05:00:07 np0005549474 multipathd[237922]: + [[ ! -n '' ]]
Dec  7 05:00:07 np0005549474 multipathd[237922]: + . kolla_extend_start
Dec  7 05:00:07 np0005549474 multipathd[237922]: Running command: '/usr/sbin/multipathd -d'
Dec  7 05:00:07 np0005549474 multipathd[237922]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  7 05:00:07 np0005549474 multipathd[237922]: + umask 0022
Dec  7 05:00:07 np0005549474 multipathd[237922]: + exec /usr/sbin/multipathd -d
Dec  7 05:00:07 np0005549474 multipathd[237922]: 3570.699079 | --------start up--------
Dec  7 05:00:07 np0005549474 multipathd[237922]: 3570.699100 | read /etc/multipath.conf
Dec  7 05:00:07 np0005549474 multipathd[237922]: 3570.703898 | path checkers start up
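
Taken together, the `+`-prefixed xtrace lines above reconstruct the container entry point almost verbatim. A minimal sketch of the sequence the trace implies (not the shipped kolla_start source; the variable tested before kolla_extend_start expands to empty in the trace, so its real name is not recoverable):

    # Entry-point sequence reconstructed from the xtrace output above (sketch):
    sudo -E kolla_set_configs              # applies /var/lib/kolla/config_files/config.json (COPY_ALWAYS)
    CMD="$(cat /run_command)"              # here: /usr/sbin/multipathd -d
    ARGS=
    sudo kolla_copy_cacerts
    [[ ! -n "$EXTEND_GUARD" ]] && . kolla_extend_start  # guard variable name is a placeholder
    echo "Running command: '$CMD'"
    umask 0022
    exec $CMD $ARGS                        # unquoted so ARGS can contribute extra words
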
Dec  7 05:00:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100007 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:00:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:08 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:08.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:08 np0005549474 python3.9[238112]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
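
The Ansible tasks bracketing the restart trace a marker-file pattern: stat /etc/multipath/.multipath_restart_required, find which container mounts /etc/multipath.conf, restart the edpm_multipathd unit, then delete the marker so the restart fires only once per configuration change. A hedged shell equivalent (illustrative; the real logic lives in the edpm-ansible role):

    marker=/etc/multipath/.multipath_restart_required
    if [ -e "$marker" ]; then
        # Which container consumes the config file that changed?
        podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'
        systemctl restart edpm_multipathd
        rm -f "$marker"     # clear the flag so the restart is one-shot
    fi
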
Dec  7 05:00:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:00:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:09 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:09 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:09 np0005549474 python3.9[238266]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  7 05:00:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:09.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:09] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 05:00:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:09] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 05:00:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:10 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:10.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:00:10 np0005549474 python3.9[238418]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  7 05:00:10 np0005549474 kernel: Key type psk registered
Dec  7 05:00:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:11 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:11 np0005549474 podman[238557]: 2025-12-07 10:00:11.653027972 +0000 UTC m=+0.072221196 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  7 05:00:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:11 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:11 np0005549474 python3.9[238596]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 05:00:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:11.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:12 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:12 np0005549474 python3.9[238729]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101611.2733943-1850-237683029953118/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:00:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
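
The two mon lines show mgr.compute-0.dotugk polling the OSD blocklist on a timer; the audit entry is the dispatch record for that command. The same query can be issued manually with the standard CLI:

    # Manual equivalent of the dispatched mon_command above:
    ceph osd blocklist ls --format json
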
Dec  7 05:00:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:00:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:00:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:00:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:00:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:00:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:00:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:12.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:00:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:13 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:13 np0005549474 python3.9[238882]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:13 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:13.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:14 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:14 np0005549474 python3.9[239037]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 05:00:14 np0005549474 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  7 05:00:14 np0005549474 systemd[1]: Stopped Load Kernel Modules.
Dec  7 05:00:14 np0005549474 systemd[1]: Stopping Load Kernel Modules...
Dec  7 05:00:14 np0005549474 systemd[1]: Starting Load Kernel Modules...
Dec  7 05:00:14 np0005549474 systemd[1]: Finished Load Kernel Modules.
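
The sequence above makes the nvme-fabrics module both immediate and persistent: modprobe loads it now (the adjacent "Key type psk registered" kernel line appears as the module's dependencies register), a drop-in under /etc/modules-load.d plus a line in /etc/modules covers future boots, and restarting systemd-modules-load exercises the new config. A manual equivalent:

    modprobe nvme-fabrics
    echo nvme-fabrics > /etc/modules-load.d/nvme-fabrics.conf
    systemctl restart systemd-modules-load.service
    lsmod | grep nvme_fabrics    # note: lsmod shows underscores, not hyphens
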
Dec  7 05:00:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:14.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:00:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:15 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:15 np0005549474 python3.9[239194]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
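
The dnf task above installs the NVMe userspace tooling; a CLI equivalent plus a quick sanity check (sketch):

    dnf -y install nvme-cli
    nvme version    # confirm the binary is on PATH
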
Dec  7 05:00:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:15 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:15.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:16 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:16 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:00:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:16.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:00:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:17 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:00:17.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:00:17 np0005549474 systemd[1]: Reloading.
Dec  7 05:00:17 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 05:00:17 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 05:00:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:17 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:17 np0005549474 systemd[1]: Reloading.
Dec  7 05:00:17 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 05:00:17 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
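
Both generator notices recur on every systemd reload and are informational. The rc.local one disappears once the script is marked executable (only appropriate if /etc/rc.d/rc.local is actually meant to run at boot):

    chmod +x /etc/rc.d/rc.local
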
Dec  7 05:00:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:17.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:18 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:18 np0005549474 systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  7 05:00:18 np0005549474 systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  7 05:00:18 np0005549474 lvm[239313]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:00:18 np0005549474 lvm[239313]: VG ceph_vg0 finished
Dec  7 05:00:18 np0005549474 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 05:00:18 np0005549474 systemd[1]: Starting man-db-cache-update.service...
Dec  7 05:00:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:18.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:18 np0005549474 systemd[1]: Reloading.
Dec  7 05:00:18 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 05:00:18 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 05:00:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:00:18 np0005549474 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 05:00:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:19 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:19 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:00:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:19 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:00:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:19 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:19 np0005549474 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 05:00:19 np0005549474 systemd[1]: Finished man-db-cache-update.service.
Dec  7 05:00:19 np0005549474 systemd[1]: man-db-cache-update.service: Consumed 1.429s CPU time.
Dec  7 05:00:19 np0005549474 systemd[1]: run-r107bc80cf5bc4367961d8abffc25ae3f.service: Deactivated successfully.
Dec  7 05:00:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:19.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:19] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:00:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:19] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:00:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:20 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:20.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:20 np0005549474 python3.9[240679]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 05:00:20 np0005549474 systemd[1]: Stopping Open-iSCSI...
Dec  7 05:00:20 np0005549474 iscsid[227920]: iscsid shutting down.
Dec  7 05:00:20 np0005549474 systemd[1]: iscsid.service: Deactivated successfully.
Dec  7 05:00:20 np0005549474 systemd[1]: Stopped Open-iSCSI.
Dec  7 05:00:20 np0005549474 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  7 05:00:20 np0005549474 systemd[1]: Starting Open-iSCSI...
Dec  7 05:00:20 np0005549474 systemd[1]: Started Open-iSCSI.
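
The "unmet condition check" line during this restart is expected rather than an error: the one-time iscsi configuration unit is gated on ConditionPathExists=!/etc/iscsi/initiatorname.iscsi, so it is skipped whenever an initiator name already exists. To confirm:

    cat /etc/iscsi/initiatorname.iscsi    # presence of this file is what skips the one-time unit
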
Dec  7 05:00:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:00:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:21 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff40089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:21 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff40089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:21 np0005549474 python3.9[240835]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 05:00:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:21.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:22 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:00:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:22.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:00:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:22 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:00:22 np0005549474 python3.9[240991]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:00:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:23 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:23 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:23 np0005549474 python3.9[241145]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  7 05:00:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:23.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:23 np0005549474 systemd[1]: Reloading.
Dec  7 05:00:24 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 05:00:24 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 05:00:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:24 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff40089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:24.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:00:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:25 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:25 np0005549474 python3.9[241331]: ansible-ansible.builtin.service_facts Invoked
Dec  7 05:00:25 np0005549474 network[241349]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  7 05:00:25 np0005549474 network[241350]: 'network-scripts' will be removed from distribution in near future.
Dec  7 05:00:25 np0005549474 network[241351]: It is advised to switch to 'NetworkManager' instead for network management.
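
The three network[] lines are the legacy network-scripts service announcing its own deprecation while Ansible gathers service facts. A quick check of which stack actually manages networking on the node (sketch):

    systemctl is-enabled network.service NetworkManager.service
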
Dec  7 05:00:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:25 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:25.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:26 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:26.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 05:00:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:27 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:00:27.064Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:00:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:00:27.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:00:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:00:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:00:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:27 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:00:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:27.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:00:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100028 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:00:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:28 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:28.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 05:00:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:29 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:29 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:29.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:29] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 05:00:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:29] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 05:00:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:30 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:30.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:00:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:31 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:31 np0005549474 python3.9[241631]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 05:00:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:31 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:31.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:32 np0005549474 python3.9[241785]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 05:00:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:32 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:32.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:00:32 np0005549474 python3.9[241938]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 05:00:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:33 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:33 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:33 np0005549474 python3.9[242093]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 05:00:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:33.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:34 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:34.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:34 np0005549474 python3.9[242246]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 05:00:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:00:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:35 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:35 np0005549474 python3.9[242400]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 05:00:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:35 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:35.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:36 np0005549474 python3.9[242554]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 05:00:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:36 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:00:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:36.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:00:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 05:00:36 np0005549474 python3.9[242709]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 05:00:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:37 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:00:37.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:00:37 np0005549474 ceph-osd[83033]: bluestore.MempoolThread fragmentation_score=0.000029 took=0.000052s
Dec  7 05:00:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:37 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:37.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:38 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:38 np0005549474 podman[242812]: 2025-12-07 10:00:38.263655288 +0000 UTC m=+0.075826284 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  7 05:00:38 np0005549474 podman[242813]: 2025-12-07 10:00:38.285826619 +0000 UTC m=+0.098052427 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  7 05:00:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:38.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:38 np0005549474 python3.9[242957]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:00:38.612 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:00:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:00:38.612 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:00:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:00:38.613 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:00:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:00:38 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:00:38 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 05:00:38 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 05:00:38 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 05:00:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:39 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:39 np0005549474 python3.9[243191]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:39 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:00:39 np0005549474 python3.9[243412]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:39 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:39 np0005549474 podman[243504]: 2025-12-07 10:00:39.933124759 +0000 UTC m=+0.037714122 container create fccbca839987ee6fa6f19cfae126083781c80b9a311acff42deb09794471203d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:00:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:00:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:39.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:00:39 np0005549474 systemd[1]: Started libpod-conmon-fccbca839987ee6fa6f19cfae126083781c80b9a311acff42deb09794471203d.scope.
Dec  7 05:00:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:39] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 05:00:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:39] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec  7 05:00:39 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:00:40 np0005549474 podman[243504]: 2025-12-07 10:00:39.91723265 +0000 UTC m=+0.021822023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:00:40 np0005549474 podman[243504]: 2025-12-07 10:00:40.013055324 +0000 UTC m=+0.117644697 container init fccbca839987ee6fa6f19cfae126083781c80b9a311acff42deb09794471203d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bell, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:00:40 np0005549474 podman[243504]: 2025-12-07 10:00:40.019844128 +0000 UTC m=+0.124433481 container start fccbca839987ee6fa6f19cfae126083781c80b9a311acff42deb09794471203d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 05:00:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100040 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:00:40 np0005549474 podman[243504]: 2025-12-07 10:00:40.022951532 +0000 UTC m=+0.127540905 container attach fccbca839987ee6fa6f19cfae126083781c80b9a311acff42deb09794471203d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  7 05:00:40 np0005549474 exciting_bell[243565]: 167 167
Dec  7 05:00:40 np0005549474 systemd[1]: libpod-fccbca839987ee6fa6f19cfae126083781c80b9a311acff42deb09794471203d.scope: Deactivated successfully.
Dec  7 05:00:40 np0005549474 podman[243504]: 2025-12-07 10:00:40.024788152 +0000 UTC m=+0.129377505 container died fccbca839987ee6fa6f19cfae126083781c80b9a311acff42deb09794471203d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 05:00:40 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6e1359d67ab6781501acf17109d1d08f60c1f56ee3164e29ada382e9ac44c48e-merged.mount: Deactivated successfully.
Dec  7 05:00:40 np0005549474 podman[243504]: 2025-12-07 10:00:40.060657024 +0000 UTC m=+0.165246387 container remove fccbca839987ee6fa6f19cfae126083781c80b9a311acff42deb09794471203d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 05:00:40 np0005549474 systemd[1]: libpod-conmon-fccbca839987ee6fa6f19cfae126083781c80b9a311acff42deb09794471203d.scope: Deactivated successfully.
Dec  7 05:00:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:40 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:40 np0005549474 podman[243653]: 2025-12-07 10:00:40.224897061 +0000 UTC m=+0.044673610 container create d260f8bffaa48963baa1cde62a4febbdc8be47d5ef80b46e7a92779879f13305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Dec  7 05:00:40 np0005549474 systemd[1]: Started libpod-conmon-d260f8bffaa48963baa1cde62a4febbdc8be47d5ef80b46e7a92779879f13305.scope.
Dec  7 05:00:40 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:00:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0de5bab873166c259c82bcf483f1311891e58cbb6ff162abb3bde7b8d997cf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0de5bab873166c259c82bcf483f1311891e58cbb6ff162abb3bde7b8d997cf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0de5bab873166c259c82bcf483f1311891e58cbb6ff162abb3bde7b8d997cf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0de5bab873166c259c82bcf483f1311891e58cbb6ff162abb3bde7b8d997cf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:40 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0de5bab873166c259c82bcf483f1311891e58cbb6ff162abb3bde7b8d997cf0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:40 np0005549474 podman[243653]: 2025-12-07 10:00:40.203895442 +0000 UTC m=+0.023672021 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:00:40 np0005549474 podman[243653]: 2025-12-07 10:00:40.328403954 +0000 UTC m=+0.148180533 container init d260f8bffaa48963baa1cde62a4febbdc8be47d5ef80b46e7a92779879f13305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 05:00:40 np0005549474 podman[243653]: 2025-12-07 10:00:40.335081855 +0000 UTC m=+0.154858414 container start d260f8bffaa48963baa1cde62a4febbdc8be47d5ef80b46e7a92779879f13305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kilby, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  7 05:00:40 np0005549474 podman[243653]: 2025-12-07 10:00:40.346155875 +0000 UTC m=+0.165932464 container attach d260f8bffaa48963baa1cde62a4febbdc8be47d5ef80b46e7a92779879f13305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:00:40 np0005549474 python3.9[243655]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:40.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:40 np0005549474 nifty_kilby[243689]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:00:40 np0005549474 nifty_kilby[243689]: --> All data devices are unavailable
Dec  7 05:00:40 np0005549474 systemd[1]: libpod-d260f8bffaa48963baa1cde62a4febbdc8be47d5ef80b46e7a92779879f13305.scope: Deactivated successfully.
Dec  7 05:00:40 np0005549474 podman[243653]: 2025-12-07 10:00:40.663089118 +0000 UTC m=+0.482865677 container died d260f8bffaa48963baa1cde62a4febbdc8be47d5ef80b46e7a92779879f13305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kilby, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:00:40 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d0de5bab873166c259c82bcf483f1311891e58cbb6ff162abb3bde7b8d997cf0-merged.mount: Deactivated successfully.
Dec  7 05:00:40 np0005549474 podman[243653]: 2025-12-07 10:00:40.711503209 +0000 UTC m=+0.531279768 container remove d260f8bffaa48963baa1cde62a4febbdc8be47d5ef80b46e7a92779879f13305 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kilby, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 05:00:40 np0005549474 systemd[1]: libpod-conmon-d260f8bffaa48963baa1cde62a4febbdc8be47d5ef80b46e7a92779879f13305.scope: Deactivated successfully.
Dec  7 05:00:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 05:00:41 np0005549474 python3.9[243894]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:41 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.130381) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641130440, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1380, "num_deletes": 254, "total_data_size": 2663590, "memory_usage": 2706608, "flush_reason": "Manual Compaction"}
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641151584, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2600372, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17761, "largest_seqno": 19140, "table_properties": {"data_size": 2593844, "index_size": 3727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 12972, "raw_average_key_size": 18, "raw_value_size": 2580968, "raw_average_value_size": 3778, "num_data_blocks": 165, "num_entries": 683, "num_filter_entries": 683, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101506, "oldest_key_time": 1765101506, "file_creation_time": 1765101641, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 21245 microseconds, and 6863 cpu microseconds.
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.151630) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2600372 bytes OK
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.151650) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.153012) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.153032) EVENT_LOG_v1 {"time_micros": 1765101641153027, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.153050) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2657588, prev total WAL file size 2657588, number of live WAL files 2.
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.153952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2539KB)], [38(11MB)]
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641153983, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14580074, "oldest_snapshot_seqno": -1}
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5091 keys, 14076540 bytes, temperature: kUnknown
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641318700, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 14076540, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14040696, "index_size": 22038, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 129304, "raw_average_key_size": 25, "raw_value_size": 13946652, "raw_average_value_size": 2739, "num_data_blocks": 906, "num_entries": 5091, "num_filter_entries": 5091, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765101641, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.318911) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 14076540 bytes
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.320672) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.5 rd, 85.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.4 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(11.0) write-amplify(5.4) OK, records in: 5615, records dropped: 524 output_compression: NoCompression
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.320687) EVENT_LOG_v1 {"time_micros": 1765101641320679, "job": 18, "event": "compaction_finished", "compaction_time_micros": 164794, "compaction_time_cpu_micros": 33663, "output_level": 6, "num_output_files": 1, "total_output_size": 14076540, "num_input_records": 5615, "num_output_records": 5091, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641321118, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec  7 05:00:41 np0005549474 podman[244037]: 2025-12-07 10:00:41.321226431 +0000 UTC m=+0.059568924 container create e766b4437285c435209de3dbd5869c264d0fb9b129a4b6b3555723f9189e5755 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641322779, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.153866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.322816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.322821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.322823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.322825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:00:41 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:00:41.322827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:00:41 np0005549474 systemd[1]: Started libpod-conmon-e766b4437285c435209de3dbd5869c264d0fb9b129a4b6b3555723f9189e5755.scope.
Dec  7 05:00:41 np0005549474 podman[244037]: 2025-12-07 10:00:41.288180046 +0000 UTC m=+0.026522559 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:00:41 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:00:41 np0005549474 podman[244037]: 2025-12-07 10:00:41.403729615 +0000 UTC m=+0.142072108 container init e766b4437285c435209de3dbd5869c264d0fb9b129a4b6b3555723f9189e5755 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 05:00:41 np0005549474 podman[244037]: 2025-12-07 10:00:41.410070568 +0000 UTC m=+0.148413041 container start e766b4437285c435209de3dbd5869c264d0fb9b129a4b6b3555723f9189e5755 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  7 05:00:41 np0005549474 podman[244037]: 2025-12-07 10:00:41.413065969 +0000 UTC m=+0.151408452 container attach e766b4437285c435209de3dbd5869c264d0fb9b129a4b6b3555723f9189e5755 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:00:41 np0005549474 nervous_thompson[244083]: 167 167
Dec  7 05:00:41 np0005549474 systemd[1]: libpod-e766b4437285c435209de3dbd5869c264d0fb9b129a4b6b3555723f9189e5755.scope: Deactivated successfully.
Dec  7 05:00:41 np0005549474 podman[244037]: 2025-12-07 10:00:41.41904071 +0000 UTC m=+0.157383273 container died e766b4437285c435209de3dbd5869c264d0fb9b129a4b6b3555723f9189e5755 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_thompson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:00:41 np0005549474 systemd[1]: var-lib-containers-storage-overlay-7ca498d72192c7730947268cebce536b6a80271e26168384c2ea64840b50e2ea-merged.mount: Deactivated successfully.
Dec  7 05:00:41 np0005549474 podman[244037]: 2025-12-07 10:00:41.456600407 +0000 UTC m=+0.194942880 container remove e766b4437285c435209de3dbd5869c264d0fb9b129a4b6b3555723f9189e5755 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_thompson, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:00:41 np0005549474 systemd[1]: libpod-conmon-e766b4437285c435209de3dbd5869c264d0fb9b129a4b6b3555723f9189e5755.scope: Deactivated successfully.
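The podman records above trace one short-lived cephadm helper container end to end: image pull, create, init, start, attach, immediate exit ("died"), and remove, with systemd tearing down the matching libcrun and conmon scopes. A minimal sketch for watching the same lifecycle live, assuming only that podman is installed (event field capitalization varies across podman versions, hence the fallbacks):

    import json
    import subprocess

    # Follow libpod events as line-delimited JSON and print container
    # lifecycle transitions (create/init/start/attach/died/remove).
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        if (ev.get("Type") or ev.get("type")) == "container":
            print(ev.get("Status") or ev.get("status"),
                  ev.get("Name") or ev.get("name"),
                  ev.get("Image") or ev.get("image"))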
Dec  7 05:00:41 np0005549474 podman[244156]: 2025-12-07 10:00:41.598156351 +0000 UTC m=+0.034767303 container create 8b0ace51b67c64d433b9cad8063a40f771ec9368ba7110a80f0954f9a02f6d6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 05:00:41 np0005549474 systemd[1]: Started libpod-conmon-8b0ace51b67c64d433b9cad8063a40f771ec9368ba7110a80f0954f9a02f6d6d.scope.
Dec  7 05:00:41 np0005549474 python3.9[244150]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
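The python3.9 ansible-ansible.builtin.file records here and below are an adoption playbook deleting obsolete tripleo_* systemd unit files one at a time; state=absent with recurse=False amounts to a plain, idempotent unlink. A sketch of those semantics (an illustration of what the module reports as changed, not the module's source):

    from pathlib import Path

    def file_absent(path: str) -> bool:
        """Mimic ansible.builtin.file state=absent for a regular file:
        delete it if present, report changed=False if already gone."""
        p = Path(path)
        if p.is_symlink() or p.exists():
            p.unlink()
            return True   # changed
        return False      # already absent

    # The unit removed at this step of the run:
    file_absent("/usr/lib/systemd/system/tripleo_nova_metadata.service")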
Dec  7 05:00:41 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:00:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c11f58991631a572c79f1a95d5b77bf528767387840888ddfb562d433c7381/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c11f58991631a572c79f1a95d5b77bf528767387840888ddfb562d433c7381/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c11f58991631a572c79f1a95d5b77bf528767387840888ddfb562d433c7381/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c11f58991631a572c79f1a95d5b77bf528767387840888ddfb562d433c7381/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:41 np0005549474 podman[244156]: 2025-12-07 10:00:41.583619648 +0000 UTC m=+0.020230630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:00:41 np0005549474 podman[244156]: 2025-12-07 10:00:41.689390731 +0000 UTC m=+0.126001753 container init 8b0ace51b67c64d433b9cad8063a40f771ec9368ba7110a80f0954f9a02f6d6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_brattain, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:00:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:41 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:41 np0005549474 podman[244156]: 2025-12-07 10:00:41.701114729 +0000 UTC m=+0.137725681 container start 8b0ace51b67c64d433b9cad8063a40f771ec9368ba7110a80f0954f9a02f6d6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Dec  7 05:00:41 np0005549474 podman[244156]: 2025-12-07 10:00:41.704145522 +0000 UTC m=+0.140756494 container attach 8b0ace51b67c64d433b9cad8063a40f771ec9368ba7110a80f0954f9a02f6d6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec  7 05:00:41 np0005549474 podman[244175]: 2025-12-07 10:00:41.747394302 +0000 UTC m=+0.062071132 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  7 05:00:41 np0005549474 brave_brattain[244172]: {
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:    "0": [
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:        {
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            "devices": [
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "/dev/loop3"
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            ],
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            "lv_name": "ceph_lv0",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            "lv_size": "21470642176",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            "name": "ceph_lv0",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            "tags": {
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.cluster_name": "ceph",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.crush_device_class": "",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.encrypted": "0",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.osd_id": "0",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.type": "block",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.vdo": "0",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:                "ceph.with_tpm": "0"
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            },
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            "type": "block",
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:            "vg_name": "ceph_vg0"
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:        }
Dec  7 05:00:41 np0005549474 brave_brattain[244172]:    ]
Dec  7 05:00:41 np0005549474 brave_brattain[244172]: }
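The JSON the brave_brattain container just printed is ceph-volume's LVM inventory: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags available both as the comma-separated lv_tags string and as the parsed tags object. A short sketch that recovers the OSD-to-device mapping from a captured copy (the literal below is trimmed to the fields actually used):

    import json

    raw = """
    {
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
            "ceph.osd_id": "0"
          }
        }
      ]
    }
    """

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"fsid={lv['tags']['ceph.osd_fsid']} "
                  f"devices={','.join(lv['devices'])}")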
Dec  7 05:00:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:00:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:41.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
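Each RGW probe shows up as three radosgw lines: request start, request done, and a beast access-log record carrying client, request line, status, byte count, and latency (the anonymous HEAD / every two seconds looks like a load-balancer health check, though the log does not say so itself). A regex sketch for pulling the interesting fields out of the beast line:

    import re

    line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:10:00:41.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000026s')

    m = re.search(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s',
        line,
    )
    if m:
        print(m.group("client"), m.group("req"),
              m.group("status"), float(m.group("latency")))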
Dec  7 05:00:42 np0005549474 podman[244156]: 2025-12-07 10:00:42.003445886 +0000 UTC m=+0.440056858 container died 8b0ace51b67c64d433b9cad8063a40f771ec9368ba7110a80f0954f9a02f6d6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:00:42 np0005549474 systemd[1]: libpod-8b0ace51b67c64d433b9cad8063a40f771ec9368ba7110a80f0954f9a02f6d6d.scope: Deactivated successfully.
Dec  7 05:00:42 np0005549474 systemd[1]: var-lib-containers-storage-overlay-20c11f58991631a572c79f1a95d5b77bf528767387840888ddfb562d433c7381-merged.mount: Deactivated successfully.
Dec  7 05:00:42 np0005549474 podman[244156]: 2025-12-07 10:00:42.046781171 +0000 UTC m=+0.483392163 container remove 8b0ace51b67c64d433b9cad8063a40f771ec9368ba7110a80f0954f9a02f6d6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  7 05:00:42 np0005549474 systemd[1]: libpod-conmon-8b0ace51b67c64d433b9cad8063a40f771ec9368ba7110a80f0954f9a02f6d6d.scope: Deactivated successfully.
Dec  7 05:00:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:42 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:00:42
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.nfs', 'cephfs.cephfs.data', 'images', '.rgw.root', 'volumes', 'backups', 'default.rgw.control', '.mgr']
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
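Summarizing the five balancer lines above: an automatic upmap optimization ran against the twelve listed pools with a 5% misplaced-objects ceiling and found nothing to move (0 of at most 10 upmap changes prepared). A sketch for querying the same module from the CLI; it assumes an admin keyring is available, and since the exact JSON keys of `ceph balancer status` vary by release, the lookups are defensive:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "balancer", "status", "-f", "json"], text=True
    )
    status = json.loads(out)
    print("active:", status.get("active"))
    print("mode:", status.get("mode"))
    print("last result:", status.get("last_optimize_result"))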
Dec  7 05:00:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:00:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:00:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:42.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:42 np0005549474 python3.9[244413]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
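Each pg_autoscaler pair of lines computes a PG target as capacity ratio x bias x PG budget, then quantizes to a power of two no lower than the pool's current pg_num. The budget implied by these numbers is 300 PGs, consistent with the default mon_target_pg_per_osd of 100 times the three OSDs behind this 60 GiB cluster; that factor is inferred from the arithmetic, not stated in the log. A worked check against two of the lines above:

    # pg_target = usage_ratio * bias * (mon_target_pg_per_osd * num_osds)
    budget = 100 * 3  # assumed defaults: 100 PGs per OSD, 3 OSDs

    # '.mgr' line: expect 0.0021557249951162337
    print(7.185749983720779e-06 * 1.0 * budget)

    # 'cephfs.cephfs.meta' line: expect 0.0006104707950771635
    print(5.087256625643029e-07 * 4.0 * budget)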
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:00:42 np0005549474 podman[244479]: 2025-12-07 10:00:42.654665864 +0000 UTC m=+0.039940273 container create 1bdcf869012015c3819daafa0e4d56a00b9be6f3db2d3cabb979074409294ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 05:00:42 np0005549474 systemd[1]: Started libpod-conmon-1bdcf869012015c3819daafa0e4d56a00b9be6f3db2d3cabb979074409294ccf.scope.
Dec  7 05:00:42 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:00:42 np0005549474 podman[244479]: 2025-12-07 10:00:42.731285038 +0000 UTC m=+0.116559467 container init 1bdcf869012015c3819daafa0e4d56a00b9be6f3db2d3cabb979074409294ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:00:42 np0005549474 podman[244479]: 2025-12-07 10:00:42.636753928 +0000 UTC m=+0.022028357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:00:42 np0005549474 podman[244479]: 2025-12-07 10:00:42.739043788 +0000 UTC m=+0.124318197 container start 1bdcf869012015c3819daafa0e4d56a00b9be6f3db2d3cabb979074409294ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_engelbart, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Dec  7 05:00:42 np0005549474 podman[244479]: 2025-12-07 10:00:42.741720551 +0000 UTC m=+0.126994980 container attach 1bdcf869012015c3819daafa0e4d56a00b9be6f3db2d3cabb979074409294ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 05:00:42 np0005549474 festive_engelbart[244519]: 167 167
Dec  7 05:00:42 np0005549474 systemd[1]: libpod-1bdcf869012015c3819daafa0e4d56a00b9be6f3db2d3cabb979074409294ccf.scope: Deactivated successfully.
Dec  7 05:00:42 np0005549474 podman[244479]: 2025-12-07 10:00:42.744681881 +0000 UTC m=+0.129956290 container died 1bdcf869012015c3819daafa0e4d56a00b9be6f3db2d3cabb979074409294ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:00:42 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b4b9ae7bbc973c2b030c271f5b3ebc810babc8b3160b426cfdbfc87119181f30-merged.mount: Deactivated successfully.
Dec  7 05:00:42 np0005549474 podman[244479]: 2025-12-07 10:00:42.774959821 +0000 UTC m=+0.160234230 container remove 1bdcf869012015c3819daafa0e4d56a00b9be6f3db2d3cabb979074409294ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_engelbart, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Dec  7 05:00:42 np0005549474 systemd[1]: libpod-conmon-1bdcf869012015c3819daafa0e4d56a00b9be6f3db2d3cabb979074409294ccf.scope: Deactivated successfully.
Dec  7 05:00:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:00:42 np0005549474 podman[244613]: 2025-12-07 10:00:42.986325405 +0000 UTC m=+0.074760445 container create 7daec2443a0625f3d7c45f9c7fb935e2628d6c7e25d7a6362a07b97d559cc9ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  7 05:00:43 np0005549474 systemd[1]: Started libpod-conmon-7daec2443a0625f3d7c45f9c7fb935e2628d6c7e25d7a6362a07b97d559cc9ac.scope.
Dec  7 05:00:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:43 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4001b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:43 np0005549474 podman[244613]: 2025-12-07 10:00:42.956727074 +0000 UTC m=+0.045162164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:00:43 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:00:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c23450fedc8ce14ac754d226fb1332dcd55a916436faf26c4f39970ab4d89c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c23450fedc8ce14ac754d226fb1332dcd55a916436faf26c4f39970ab4d89c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c23450fedc8ce14ac754d226fb1332dcd55a916436faf26c4f39970ab4d89c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c23450fedc8ce14ac754d226fb1332dcd55a916436faf26c4f39970ab4d89c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:00:43 np0005549474 podman[244613]: 2025-12-07 10:00:43.096380036 +0000 UTC m=+0.184815036 container init 7daec2443a0625f3d7c45f9c7fb935e2628d6c7e25d7a6362a07b97d559cc9ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec  7 05:00:43 np0005549474 podman[244613]: 2025-12-07 10:00:43.102618765 +0000 UTC m=+0.191053765 container start 7daec2443a0625f3d7c45f9c7fb935e2628d6c7e25d7a6362a07b97d559cc9ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 05:00:43 np0005549474 podman[244613]: 2025-12-07 10:00:43.10653366 +0000 UTC m=+0.194968660 container attach 7daec2443a0625f3d7c45f9c7fb935e2628d6c7e25d7a6362a07b97d559cc9ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:00:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 05:00:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 7653 writes, 30K keys, 7653 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7653 writes, 1575 syncs, 4.86 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 765 writes, 1338 keys, 765 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s#012Interval WAL: 765 writes, 379 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
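journald escapes embedded newlines as #012 (octal for \n) when forwarding a multi-line message to syslog, which is why the whole RocksDB stats dump above arrives as a single very long record. Decoding the escapes recovers the original DB Stats and Compaction Stats tables; a short sketch, assuming this capture has been saved as a file named messages:

    # Print syslog-escaped RocksDB stat dumps as readable multi-line tables.
    with open("messages", encoding="utf-8") as f:
        for rec in f:
            if "#012** DB Stats **" in rec:
                print(rec.replace("#012", "\n"), end="")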
Dec  7 05:00:43 np0005549474 python3.9[244661]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:43 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:43 np0005549474 lvm[244763]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:00:43 np0005549474 lvm[244763]: VG ceph_vg0 finished
Dec  7 05:00:43 np0005549474 dreamy_saha[244664]: {}
Dec  7 05:00:43 np0005549474 systemd[1]: libpod-7daec2443a0625f3d7c45f9c7fb935e2628d6c7e25d7a6362a07b97d559cc9ac.scope: Deactivated successfully.
Dec  7 05:00:43 np0005549474 podman[244613]: 2025-12-07 10:00:43.813863356 +0000 UTC m=+0.902298396 container died 7daec2443a0625f3d7c45f9c7fb935e2628d6c7e25d7a6362a07b97d559cc9ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:00:43 np0005549474 systemd[1]: libpod-7daec2443a0625f3d7c45f9c7fb935e2628d6c7e25d7a6362a07b97d559cc9ac.scope: Consumed 1.114s CPU time.
Dec  7 05:00:43 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a8c23450fedc8ce14ac754d226fb1332dcd55a916436faf26c4f39970ab4d89c-merged.mount: Deactivated successfully.
Dec  7 05:00:43 np0005549474 podman[244613]: 2025-12-07 10:00:43.874423976 +0000 UTC m=+0.962859016 container remove 7daec2443a0625f3d7c45f9c7fb935e2628d6c7e25d7a6362a07b97d559cc9ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:00:43 np0005549474 systemd[1]: libpod-conmon-7daec2443a0625f3d7c45f9c7fb935e2628d6c7e25d7a6362a07b97d559cc9ac.scope: Deactivated successfully.
Dec  7 05:00:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:00:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:00:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:43.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:44 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:44 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:00:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:44 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:44.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 05:00:44 np0005549474 python3.9[244932]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4001b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:45 np0005549474 python3.9[245086]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:45.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:46 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:46 np0005549474 python3.9[245238]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:46.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:00:46 np0005549474 python3.9[245391]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:47 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:00:47.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
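This alertmanager dispatcher error means the ceph-dashboard webhook receivers on compute-1 and compute-2 timed out ("context deadline exceeded") after two delivery attempts per receiver. For checking whether anything is listening on that 8443 path at all, a hypothetical stand-in receiver (the port and URL shape come from the log; the handler is an illustration, not the dashboard's API):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Alertmanager webhooks POST a JSON payload with an "alerts" list.
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            print(self.path, "->", len(payload.get("alerts", [])), "alert(s)")
            self.send_response(200)
            self.end_headers()

    # Listen where the dashboard receiver would: port 8443 (plain HTTP here).
    HTTPServer(("", 8443), Receiver).serve_forever()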
Dec  7 05:00:47 np0005549474 python3.9[245544]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:47 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:47.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:48 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4001b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:48 np0005549474 python3.9[245696]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:00:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:48.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:00:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:48 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:00:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:00:48 np0005549474 python3.9[245849]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:49 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:49 np0005549474 python3.9[246003]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:00:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:49 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd40014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:49.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:49] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  7 05:00:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:49] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec  7 05:00:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:50 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:50.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 05:00:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:51 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4001b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:51 np0005549474 python3.9[246156]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 05:00:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:51 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:51 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:00:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:51 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:00:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:52 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd40014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:52 np0005549474 python3.9[246309]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  7 05:00:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:52.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 05:00:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:53 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:53 np0005549474 python3.9[246462]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  7 05:00:53 np0005549474 systemd[1]: Reloading.
Dec  7 05:00:53 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 05:00:53 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 05:00:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:53 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:53.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:54 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:54 np0005549474 python3.9[246650]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 05:00:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:54.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:54 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:00:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:00:55 np0005549474 python3.9[246804]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 05:00:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:55 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4002300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:55 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:55 np0005549474 python3.9[246958]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 05:00:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:55.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:00:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:56 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:56 np0005549474 python3.9[247111]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 05:00:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:56.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 05:00:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:57 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:00:57.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:00:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:00:57.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:00:57 np0005549474 python3.9[247267]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 05:00:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:00:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:00:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:57 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4002300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:57 np0005549474 python3.9[247421]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 05:00:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:00:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:57.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:00:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:58 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:58 np0005549474 python3.9[247574]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 05:00:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:00:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:58.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:00:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 05:00:58 np0005549474 python3.9[247727]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 05:00:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:59 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:00:59 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:00:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:00:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:00:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:59.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:00:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:59] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:00:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:00:59] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:01:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100100 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:01:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:00 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4002300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:00.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:01:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:01 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:01 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:01.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:02 np0005549474 python3.9[247920]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:02 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:02.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:02 np0005549474 python3.9[248072]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 05:01:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:03 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4002300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:03 np0005549474 python3.9[248225]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:03 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:03.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:04 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:04 np0005549474 python3.9[248378]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:04.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 05:01:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:05 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:05 np0005549474 python3.9[248531]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:05 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:05 np0005549474 python3.9[248684]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:05.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:06 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:06.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:06 np0005549474 python3.9[248836]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:01:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:07 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:01:07.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:01:07 np0005549474 python3.9[248989]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:07 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:07 np0005549474 python3.9[249142]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:07.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:08 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:08 np0005549474 podman[249266]: 2025-12-07 10:01:08.484248129 +0000 UTC m=+0.066785190 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  7 05:01:08 np0005549474 podman[249267]: 2025-12-07 10:01:08.532210027 +0000 UTC m=+0.110073681 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  7 05:01:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:08.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:08 np0005549474 python3.9[249327]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:01:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:09 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:09 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:09] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:01:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:09] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:01:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:10.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:10 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:10.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:01:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:11 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:11 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:12.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100112 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:01:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:12 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:12 np0005549474 podman[249367]: 2025-12-07 10:01:12.282575972 +0000 UTC m=+0.096230417 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 05:01:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:01:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:01:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:01:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:01:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:01:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:01:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:01:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:01:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:12.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:01:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:13 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:13 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:14.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:14 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:14.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:14 np0005549474 python3.9[249517]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  7 05:01:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:01:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:15 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc4003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:15 np0005549474 python3.9[249674]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  7 05:01:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:15 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:16.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:16 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:16.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:16 np0005549474 python3.9[249832]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  7 05:01:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:01:16 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 05:01:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:17 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:01:17.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:01:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:17 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:17 np0005549474 systemd-logind[796]: New session 55 of user zuul.
Dec  7 05:01:18 np0005549474 systemd[1]: Started Session 55 of User zuul.
Dec  7 05:01:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:18.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:18 np0005549474 systemd[1]: session-55.scope: Deactivated successfully.
Dec  7 05:01:18 np0005549474 systemd-logind[796]: Session 55 logged out. Waiting for processes to exit.
Dec  7 05:01:18 np0005549474 systemd-logind[796]: Removed session 55.
Dec  7 05:01:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:18 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:18.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:18 np0005549474 python3.9[250021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 05:01:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:01:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:19 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:19 np0005549474 python3.9[250144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101678.3518548-3433-219818019141397/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:19 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:19] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 05:01:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:19] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 05:01:20 np0005549474 python3.9[250295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 05:01:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:20.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:20 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:20 np0005549474 python3.9[250371]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:20 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:01:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:20.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:01:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:21 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:21 np0005549474 python3.9[250547]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 05:01:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:21 np0005549474 python3.9[250669]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101680.5945706-3433-62823553876443/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:21 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:22.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:22 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:22 np0005549474 python3.9[250819]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 05:01:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:01:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:22.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:01:22 np0005549474 python3.9[250940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101681.8663826-3433-70348572108469/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:01:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:23 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:23 np0005549474 python3.9[251092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 05:01:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:23 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:01:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:23 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:01:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:23 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:01:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:23 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:23 np0005549474 python3.9[251213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101682.9936264-3433-24220218808108/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:24.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:24 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:24 np0005549474 python3.9[251365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 05:01:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:24.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:01:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:25 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:25 np0005549474 python3.9[251487]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101684.0352187-3433-93760947216239/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:25 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:26.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:26 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0001c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:26 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:01:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:26.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:01:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:27 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:01:27.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:01:27 np0005549474 python3.9[251641]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:01:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:01:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:01:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:27 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:27 np0005549474 python3.9[251794]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:01:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:28.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:28 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:28.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:28 np0005549474 python3.9[251946]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 05:01:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:01:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:29 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0001dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:29 np0005549474 python3.9[252100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 05:01:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:29 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:29] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  7 05:01:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:29] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  7 05:01:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:30.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:30 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:30 np0005549474 python3.9[252223]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765101689.2278602-3754-139865467690788/.source _original_basename=.6l5g0ika follow=False checksum=c67964a81235804fcdc5f20ab8b5244396627f04 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  7 05:01:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:30.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:01:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:31 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:31 np0005549474 python3.9[252377]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 05:01:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:31 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100132 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:01:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:32.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:32 np0005549474 python3.9[252529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 05:01:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:32 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:32.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:01:32 np0005549474 python3.9[252650]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101691.7877886-3832-250277295352790/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=81f1f28d070b2613355f782b83a5777fdba9540e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:33 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:33 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:33 np0005549474 python3.9[252802]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 05:01:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:34.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:34 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:34 np0005549474 python3.9[252923]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101693.3593748-3877-244515315620961/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=2efe6ae78bce1c26d2c384be079fa366810076ad backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 05:01:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:34.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:01:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:35 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:35 np0005549474 python3.9[253077]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  7 05:01:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:35 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:36.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:36 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:36.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:36 np0005549474 python3.9[253229]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  7 05:01:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100136 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:01:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:01:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:37 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:01:37.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:01:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:37 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc80046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:37 np0005549474 python3[253383]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  7 05:01:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:38.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:38 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:38.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:01:38.613 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:01:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:01:38.613 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:01:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:01:38.614 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:01:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:01:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:39 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:39 np0005549474 podman[253421]: 2025-12-07 10:01:39.248043438 +0000 UTC m=+0.063272165 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:01:39 np0005549474 podman[253422]: 2025-12-07 10:01:39.320738856 +0000 UTC m=+0.134344309 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Dec  7 05:01:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:39 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:39] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  7 05:01:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:39] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  7 05:01:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:40.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:40 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc80046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:40.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:01:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:41 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:41 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:42.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:42 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:01:42
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'vms', '.mgr', '.nfs', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data']
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:01:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:01:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:01:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:42.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:01:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec  7 05:01:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:43 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfc80046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:43 np0005549474 podman[253518]: 2025-12-07 10:01:43.313861334 +0000 UTC m=+0.120891314 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:01:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:43 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:44.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:44 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:44.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 05:01:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:45 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc002070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:46.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:46 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfb4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:46.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:01:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:01:47.070Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:01:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:01:47.070Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:01:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:01:47.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:01:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:47 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 05:01:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:47 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff4009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:48.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:48 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc002070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:01:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:48.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:01:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:01:48 np0005549474 podman[253397]: 2025-12-07 10:01:48.972692941 +0000 UTC m=+11.047777517 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec  7 05:01:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:49 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfb40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:49 np0005549474 podman[253673]: 2025-12-07 10:01:49.132241472 +0000 UTC m=+0.049866842 container create 0b257d4777a3521c81084a3b6b90f55bcb7c31d4119965af1bf7d453dcd383ae (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible)
Dec  7 05:01:49 np0005549474 podman[253673]: 2025-12-07 10:01:49.106544266 +0000 UTC m=+0.024169656 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec  7 05:01:49 np0005549474 python3[253383]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
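
The PODMAN-CONTAINER-DEBUG entry above records the exact podman create argv that edpm_ansible derives from the nova_compute_init config_data mapping logged alongside it. A rough sketch of that flattening follows; it is an illustrative reconstruction, not the edpm_container_manage module's actual code, and it only covers the keys visible in this log.

    # Illustrative sketch (editor's assumption, not the edpm_container_manage
    # source): flattening a config_data mapping like the one logged above
    # into a `podman create` argument vector.
    import shlex

    def podman_create_args(name: str, cfg: dict) -> list[str]:
        # Base invocation mirrors the logged one: name plus conmon pidfile.
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        # The log shows the --privileged=True/False spelling, so keep it.
        args.append(f"--privileged={cfg.get('privileged', False)}")
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        if cfg.get("command"):
            args += shlex.split(cfg["command"])
        return args

    # Trimmed copy of the config_data logged above.
    cfg = {
        "image": "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
        "privileged": False,
        "user": "root",
        "net": "none",
        "environment": {"NOVA_STATEDIR_OWNERSHIP_SKIP": "/var/lib/nova/compute_id",
                        "__OS_DEBUG": False},
        "volumes": ["/dev/log:/dev/log",
                    "/var/lib/nova:/var/lib/nova:shared"],
        "command": "bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py "
                   "| logger -t nova_compute_init",
    }
    print(" ".join(podman_create_args("nova_compute_init", cfg)))
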
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:49 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:01:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
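
The handle_command/audit pairs above show the cephadm mgr module collecting a minimal ceph.conf and the client.admin and client.bootstrap-osd keyrings before reconfiguring daemons; each mon_command is mirrored by a log_channel(audit) dispatch entry. The same command can be issued by hand; ceph config generate-minimal-conf is a real CLI command, and the small wrapper below is only an illustrative way to capture its output.

    # Sketch: run the same mon command the mgr dispatches above and capture
    # the minimal ceph.conf it generates. Assumes a local `ceph` CLI with
    # admin credentials; the wrapper itself is illustrative.
    import subprocess

    minimal_conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(minimal_conf)
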
Dec  7 05:01:49 np0005549474 python3.9[253865]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 05:01:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:49] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 05:01:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:49] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  7 05:01:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:50.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:50 np0005549474 podman[253985]: 2025-12-07 10:01:50.229379544 +0000 UTC m=+0.035991856 container create a123bd04f8ad1612288cffe3f5ebffc8c094f1b91d2bb4dc4f87e44c535259b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 05:01:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:50 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff400ab70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:50 np0005549474 systemd[1]: Started libpod-conmon-a123bd04f8ad1612288cffe3f5ebffc8c094f1b91d2bb4dc4f87e44c535259b2.scope.
Dec  7 05:01:50 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:01:50 np0005549474 podman[253985]: 2025-12-07 10:01:50.214211373 +0000 UTC m=+0.020823695 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:01:50 np0005549474 podman[253985]: 2025-12-07 10:01:50.316636077 +0000 UTC m=+0.123248409 container init a123bd04f8ad1612288cffe3f5ebffc8c094f1b91d2bb4dc4f87e44c535259b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:01:50 np0005549474 podman[253985]: 2025-12-07 10:01:50.328466617 +0000 UTC m=+0.135078919 container start a123bd04f8ad1612288cffe3f5ebffc8c094f1b91d2bb4dc4f87e44c535259b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:01:50 np0005549474 podman[253985]: 2025-12-07 10:01:50.334001207 +0000 UTC m=+0.140613509 container attach a123bd04f8ad1612288cffe3f5ebffc8c094f1b91d2bb4dc4f87e44c535259b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bhaskara, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 05:01:50 np0005549474 xenodochial_bhaskara[254001]: 167 167
Dec  7 05:01:50 np0005549474 podman[253985]: 2025-12-07 10:01:50.339558017 +0000 UTC m=+0.146170319 container died a123bd04f8ad1612288cffe3f5ebffc8c094f1b91d2bb4dc4f87e44c535259b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:01:50 np0005549474 systemd[1]: libpod-a123bd04f8ad1612288cffe3f5ebffc8c094f1b91d2bb4dc4f87e44c535259b2.scope: Deactivated successfully.
Dec  7 05:01:50 np0005549474 systemd[1]: var-lib-containers-storage-overlay-36d77a60ad993740ad39fb92994cc18565351f5e383de14cddcdfe318306af04-merged.mount: Deactivated successfully.
Dec  7 05:01:50 np0005549474 podman[253985]: 2025-12-07 10:01:50.378748459 +0000 UTC m=+0.185360761 container remove a123bd04f8ad1612288cffe3f5ebffc8c094f1b91d2bb4dc4f87e44c535259b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bhaskara, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:01:50 np0005549474 systemd[1]: libpod-conmon-a123bd04f8ad1612288cffe3f5ebffc8c094f1b91d2bb4dc4f87e44c535259b2.scope: Deactivated successfully.
Dec  7 05:01:50 np0005549474 podman[254024]: 2025-12-07 10:01:50.578279812 +0000 UTC m=+0.040586220 container create af1809a4e95631430b64aa5f61ac9a152406712c774ef3497a5d8f6485445689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_solomon, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 05:01:50 np0005549474 systemd[1]: Started libpod-conmon-af1809a4e95631430b64aa5f61ac9a152406712c774ef3497a5d8f6485445689.scope.
Dec  7 05:01:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:50.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:01:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:01:50 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:01:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf45322a60dc60743ed71f7bbb01be9e6600edd641a5e752c55c9156bd90ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf45322a60dc60743ed71f7bbb01be9e6600edd641a5e752c55c9156bd90ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf45322a60dc60743ed71f7bbb01be9e6600edd641a5e752c55c9156bd90ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf45322a60dc60743ed71f7bbb01be9e6600edd641a5e752c55c9156bd90ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf45322a60dc60743ed71f7bbb01be9e6600edd641a5e752c55c9156bd90ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
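
The kernel lines above are informational: each overlay path is backed by an XFS filesystem evidently created without the bigtime feature, so its inode timestamps top out at 0x7fffffff seconds after the Unix epoch, the classic y2038 limit. Converting that constant confirms the cutoff date:

    # The 0x7fffffff limit in the kernel messages above is the y2038
    # boundary; converting it gives the last representable timestamp.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647 seconds since the Unix epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
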
Dec  7 05:01:50 np0005549474 podman[254024]: 2025-12-07 10:01:50.562053322 +0000 UTC m=+0.024359750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:01:50 np0005549474 podman[254024]: 2025-12-07 10:01:50.661334272 +0000 UTC m=+0.123640700 container init af1809a4e95631430b64aa5f61ac9a152406712c774ef3497a5d8f6485445689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:01:50 np0005549474 podman[254024]: 2025-12-07 10:01:50.67494364 +0000 UTC m=+0.137250058 container start af1809a4e95631430b64aa5f61ac9a152406712c774ef3497a5d8f6485445689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_solomon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 05:01:50 np0005549474 podman[254024]: 2025-12-07 10:01:50.681069076 +0000 UTC m=+0.143375514 container attach af1809a4e95631430b64aa5f61ac9a152406712c774ef3497a5d8f6485445689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:01:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:01:51 np0005549474 confident_solomon[254041]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:01:51 np0005549474 confident_solomon[254041]: --> All data devices are unavailable
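
confident_solomon is one of several short-lived ceph containers cephadm spawns here to probe local storage. Its verdict, one LVM data device passed in and none available, means the device is already consumed (it belongs to the existing OSD, as the relaxed_knuth listing further below confirms), so no new OSDs are created. A sketch of checking the same verdict by hand follows; it assumes `ceph-volume inventory --format json`, whose per-device records carry an "available" flag and a "rejected_reasons" list.

    # Sketch (assumes `ceph-volume inventory --format json`): report why
    # each local device was accepted or skipped, mirroring the
    # "All data devices are unavailable" verdict logged above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    for dev in json.loads(out):
        status = "available" if dev.get("available") else "unavailable"
        reasons = ", ".join(dev.get("rejected_reasons", [])) or "-"
        print(f"{dev.get('path')}: {status} ({reasons})")
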
Dec  7 05:01:51 np0005549474 systemd[1]: libpod-af1809a4e95631430b64aa5f61ac9a152406712c774ef3497a5d8f6485445689.scope: Deactivated successfully.
Dec  7 05:01:51 np0005549474 podman[254024]: 2025-12-07 10:01:51.044422466 +0000 UTC m=+0.506728874 container died af1809a4e95631430b64aa5f61ac9a152406712c774ef3497a5d8f6485445689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_solomon, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:01:51 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4caf45322a60dc60743ed71f7bbb01be9e6600edd641a5e752c55c9156bd90ca-merged.mount: Deactivated successfully.
Dec  7 05:01:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:51 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc002070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:51 np0005549474 podman[254024]: 2025-12-07 10:01:51.088067988 +0000 UTC m=+0.550374396 container remove af1809a4e95631430b64aa5f61ac9a152406712c774ef3497a5d8f6485445689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 05:01:51 np0005549474 systemd[1]: libpod-conmon-af1809a4e95631430b64aa5f61ac9a152406712c774ef3497a5d8f6485445689.scope: Deactivated successfully.
Dec  7 05:01:51 np0005549474 python3.9[254197]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  7 05:01:51 np0005549474 podman[254313]: 2025-12-07 10:01:51.601490353 +0000 UTC m=+0.060547842 container create 0f0adf76c210e97ce16c79898958bd84497bd72c6f2b337d806a26730e7e68e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:01:51 np0005549474 systemd[1]: Started libpod-conmon-0f0adf76c210e97ce16c79898958bd84497bd72c6f2b337d806a26730e7e68e9.scope.
Dec  7 05:01:51 np0005549474 podman[254313]: 2025-12-07 10:01:51.566119984 +0000 UTC m=+0.025177493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:01:51 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:01:51 np0005549474 podman[254313]: 2025-12-07 10:01:51.680497012 +0000 UTC m=+0.139554501 container init 0f0adf76c210e97ce16c79898958bd84497bd72c6f2b337d806a26730e7e68e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hypatia, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:01:51 np0005549474 podman[254313]: 2025-12-07 10:01:51.686129314 +0000 UTC m=+0.145186793 container start 0f0adf76c210e97ce16c79898958bd84497bd72c6f2b337d806a26730e7e68e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hypatia, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:01:51 np0005549474 podman[254313]: 2025-12-07 10:01:51.688787466 +0000 UTC m=+0.147844955 container attach 0f0adf76c210e97ce16c79898958bd84497bd72c6f2b337d806a26730e7e68e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 05:01:51 np0005549474 systemd[1]: libpod-0f0adf76c210e97ce16c79898958bd84497bd72c6f2b337d806a26730e7e68e9.scope: Deactivated successfully.
Dec  7 05:01:51 np0005549474 adoring_hypatia[254329]: 167 167
Dec  7 05:01:51 np0005549474 podman[254313]: 2025-12-07 10:01:51.692567009 +0000 UTC m=+0.151624498 container died 0f0adf76c210e97ce16c79898958bd84497bd72c6f2b337d806a26730e7e68e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 05:01:51 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3929f6342faec685d1b009a16d8a771bf51fae1cc6281cc05953d5327ba8a7ea-merged.mount: Deactivated successfully.
Dec  7 05:01:51 np0005549474 podman[254313]: 2025-12-07 10:01:51.738634536 +0000 UTC m=+0.197692025 container remove 0f0adf76c210e97ce16c79898958bd84497bd72c6f2b337d806a26730e7e68e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 05:01:51 np0005549474 systemd[1]: libpod-conmon-0f0adf76c210e97ce16c79898958bd84497bd72c6f2b337d806a26730e7e68e9.scope: Deactivated successfully.
Dec  7 05:01:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:51 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfb40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:51 np0005549474 podman[254408]: 2025-12-07 10:01:51.886326385 +0000 UTC m=+0.042032319 container create 132906a8281974cbad204d707335e63aba634730af6905c93a772402fa7c048b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_knuth, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 05:01:51 np0005549474 systemd[1]: Started libpod-conmon-132906a8281974cbad204d707335e63aba634730af6905c93a772402fa7c048b.scope.
Dec  7 05:01:51 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:01:51 np0005549474 podman[254408]: 2025-12-07 10:01:51.867981599 +0000 UTC m=+0.023687543 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:01:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5dedb5d8e517a454e948c4557626b5c3e883022ad5550f7a8fb8e2ebc99a6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5dedb5d8e517a454e948c4557626b5c3e883022ad5550f7a8fb8e2ebc99a6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5dedb5d8e517a454e948c4557626b5c3e883022ad5550f7a8fb8e2ebc99a6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5dedb5d8e517a454e948c4557626b5c3e883022ad5550f7a8fb8e2ebc99a6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:52.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:52 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:52.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:01:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:53 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff400ab70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:53 np0005549474 podman[254408]: 2025-12-07 10:01:53.190669399 +0000 UTC m=+1.346375353 container init 132906a8281974cbad204d707335e63aba634730af6905c93a772402fa7c048b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_knuth, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:01:53 np0005549474 podman[254408]: 2025-12-07 10:01:53.201179134 +0000 UTC m=+1.356885068 container start 132906a8281974cbad204d707335e63aba634730af6905c93a772402fa7c048b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_knuth, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  7 05:01:53 np0005549474 podman[254408]: 2025-12-07 10:01:53.204684769 +0000 UTC m=+1.360390723 container attach 132906a8281974cbad204d707335e63aba634730af6905c93a772402fa7c048b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_knuth, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Dec  7 05:01:53 np0005549474 python3.9[254504]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]: {
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:    "0": [
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:        {
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            "devices": [
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "/dev/loop3"
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            ],
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            "lv_name": "ceph_lv0",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            "lv_size": "21470642176",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            "name": "ceph_lv0",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            "tags": {
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.cluster_name": "ceph",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.crush_device_class": "",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.encrypted": "0",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.osd_id": "0",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.type": "block",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.vdo": "0",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:                "ceph.with_tpm": "0"
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            },
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            "type": "block",
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:            "vg_name": "ceph_vg0"
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:        }
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]:    ]
Dec  7 05:01:53 np0005549474 relaxed_knuth[254449]: }
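
The JSON that relaxed_knuth prints above is a ceph-volume lvm report for this host, keyed by OSD id: OSD 0 lives on LV ceph_vg0/ceph_lv0 backed by /dev/loop3, with the same metadata duplicated between the raw lv_tags string and the parsed tags mapping. A minimal parse of that structure (field names taken directly from the log) is sketched below:

    # Minimal parse of the ceph-volume lvm report printed above. The
    # reduced literal copies values verbatim from the log; a real consumer
    # would read the full report from `ceph-volume lvm list --format json`.
    import json

    raw = '''
    {
      "0": [
        {
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "vg_name": "ceph_vg0",
          "devices": ["/dev/loop3"],
          "tags": {
            "ceph.osd_id": "0",
            "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c"
          }
        }
      ]
    }
    '''

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']})")
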
Dec  7 05:01:53 np0005549474 systemd[1]: libpod-132906a8281974cbad204d707335e63aba634730af6905c93a772402fa7c048b.scope: Deactivated successfully.
Dec  7 05:01:53 np0005549474 podman[254408]: 2025-12-07 10:01:53.51567135 +0000 UTC m=+1.671377284 container died 132906a8281974cbad204d707335e63aba634730af6905c93a772402fa7c048b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 05:01:53 np0005549474 systemd[1]: var-lib-containers-storage-overlay-1f5dedb5d8e517a454e948c4557626b5c3e883022ad5550f7a8fb8e2ebc99a6e-merged.mount: Deactivated successfully.
Dec  7 05:01:53 np0005549474 podman[254408]: 2025-12-07 10:01:53.554218434 +0000 UTC m=+1.709924368 container remove 132906a8281974cbad204d707335e63aba634730af6905c93a772402fa7c048b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:01:53 np0005549474 systemd[1]: libpod-conmon-132906a8281974cbad204d707335e63aba634730af6905c93a772402fa7c048b.scope: Deactivated successfully.
Dec  7 05:01:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:53 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:01:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:53 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc001460 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:01:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:54.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:01:54 np0005549474 podman[254768]: 2025-12-07 10:01:54.165284543 +0000 UTC m=+0.053909142 container create ee945c7f18379151e4887e37abde4ced847acbe101da30119d614499e9a83257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:01:54 np0005549474 systemd[1]: Started libpod-conmon-ee945c7f18379151e4887e37abde4ced847acbe101da30119d614499e9a83257.scope.
Dec  7 05:01:54 np0005549474 podman[254768]: 2025-12-07 10:01:54.140181413 +0000 UTC m=+0.028806032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:01:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:54 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfb40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:54 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:01:54 np0005549474 podman[254768]: 2025-12-07 10:01:54.263230975 +0000 UTC m=+0.151855594 container init ee945c7f18379151e4887e37abde4ced847acbe101da30119d614499e9a83257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_borg, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 05:01:54 np0005549474 podman[254768]: 2025-12-07 10:01:54.269118064 +0000 UTC m=+0.157742663 container start ee945c7f18379151e4887e37abde4ced847acbe101da30119d614499e9a83257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_borg, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:01:54 np0005549474 podman[254768]: 2025-12-07 10:01:54.272700591 +0000 UTC m=+0.161325210 container attach ee945c7f18379151e4887e37abde4ced847acbe101da30119d614499e9a83257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_borg, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:01:54 np0005549474 dreamy_borg[254784]: 167 167
Dec  7 05:01:54 np0005549474 systemd[1]: libpod-ee945c7f18379151e4887e37abde4ced847acbe101da30119d614499e9a83257.scope: Deactivated successfully.
Dec  7 05:01:54 np0005549474 podman[254768]: 2025-12-07 10:01:54.275039175 +0000 UTC m=+0.163663824 container died ee945c7f18379151e4887e37abde4ced847acbe101da30119d614499e9a83257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_borg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 05:01:54 np0005549474 systemd[1]: var-lib-containers-storage-overlay-fda92d2df5792db644a0db48b4a5ea7137c9c68fa39a5e350eb6054fb2610d29-merged.mount: Deactivated successfully.
Dec  7 05:01:54 np0005549474 podman[254768]: 2025-12-07 10:01:54.315282784 +0000 UTC m=+0.203907383 container remove ee945c7f18379151e4887e37abde4ced847acbe101da30119d614499e9a83257 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_borg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:01:54 np0005549474 systemd[1]: libpod-conmon-ee945c7f18379151e4887e37abde4ced847acbe101da30119d614499e9a83257.scope: Deactivated successfully.
Dec  7 05:01:54 np0005549474 python3[254766]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  7 05:01:54 np0005549474 podman[254826]: 2025-12-07 10:01:54.439325104 +0000 UTC m=+0.018831541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:01:54 np0005549474 podman[254826]: 2025-12-07 10:01:54.57248205 +0000 UTC m=+0.151988467 container create c53a82134bbcd4be795738c67b4af29048b556efeec7d546ddb4c49ea17be7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:01:54 np0005549474 systemd[1]: Started libpod-conmon-c53a82134bbcd4be795738c67b4af29048b556efeec7d546ddb4c49ea17be7a9.scope.
Dec  7 05:01:54 np0005549474 podman[254853]: 2025-12-07 10:01:54.622565036 +0000 UTC m=+0.159903981 container create a0dffcb99129e8c7b9453d1b93f5b368c03204e63ed1d4ac146261014e9364d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 05:01:54 np0005549474 podman[254853]: 2025-12-07 10:01:54.586258323 +0000 UTC m=+0.123597378 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec  7 05:01:54 np0005549474 python3[254766]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5 kolla_start
Dec  7 05:01:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:54.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:54 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:01:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15488b5b31988e08aa3f292ac8f244c3f3b1df69506bc7de461a966240b9518/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15488b5b31988e08aa3f292ac8f244c3f3b1df69506bc7de461a966240b9518/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15488b5b31988e08aa3f292ac8f244c3f3b1df69506bc7de461a966240b9518/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15488b5b31988e08aa3f292ac8f244c3f3b1df69506bc7de461a966240b9518/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:54 np0005549474 podman[254826]: 2025-12-07 10:01:54.663747292 +0000 UTC m=+0.243253729 container init c53a82134bbcd4be795738c67b4af29048b556efeec7d546ddb4c49ea17be7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:01:54 np0005549474 podman[254826]: 2025-12-07 10:01:54.669463886 +0000 UTC m=+0.248970303 container start c53a82134bbcd4be795738c67b4af29048b556efeec7d546ddb4c49ea17be7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 05:01:54 np0005549474 podman[254826]: 2025-12-07 10:01:54.673085674 +0000 UTC m=+0.252592201 container attach c53a82134bbcd4be795738c67b4af29048b556efeec7d546ddb4c49ea17be7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:01:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 05:01:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:55 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:55 np0005549474 lvm[255092]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:01:55 np0005549474 lvm[255092]: VG ceph_vg0 finished
Dec  7 05:01:55 np0005549474 admiring_banzai[254868]: {}
Dec  7 05:01:55 np0005549474 systemd[1]: libpod-c53a82134bbcd4be795738c67b4af29048b556efeec7d546ddb4c49ea17be7a9.scope: Deactivated successfully.
Dec  7 05:01:55 np0005549474 podman[254826]: 2025-12-07 10:01:55.39114356 +0000 UTC m=+0.970650007 container died c53a82134bbcd4be795738c67b4af29048b556efeec7d546ddb4c49ea17be7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banzai, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:01:55 np0005549474 systemd[1]: libpod-c53a82134bbcd4be795738c67b4af29048b556efeec7d546ddb4c49ea17be7a9.scope: Consumed 1.006s CPU time.
Dec  7 05:01:55 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a15488b5b31988e08aa3f292ac8f244c3f3b1df69506bc7de461a966240b9518-merged.mount: Deactivated successfully.
Dec  7 05:01:55 np0005549474 podman[254826]: 2025-12-07 10:01:55.443692313 +0000 UTC m=+1.023198730 container remove c53a82134bbcd4be795738c67b4af29048b556efeec7d546ddb4c49ea17be7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_banzai, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 05:01:55 np0005549474 systemd[1]: libpod-conmon-c53a82134bbcd4be795738c67b4af29048b556efeec7d546ddb4c49ea17be7a9.scope: Deactivated successfully.
Dec  7 05:01:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:01:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:01:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:55 np0005549474 python3.9[255124]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 05:01:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:55 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff400ab70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:01:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:56.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:56 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc001460 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:56 np0005549474 python3.9[255315]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:01:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:56.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:56 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:01:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:56 : epoch 69355004 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:01:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 05:01:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:01:57.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:01:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:57 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfb4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:01:57 np0005549474 python3.9[255467]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765101716.7832282-4153-210105761537972/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 05:01:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:01:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:01:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:57 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfd0003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:57 np0005549474 python3.9[255544]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  7 05:01:57 np0005549474 systemd[1]: Reloading.
Dec  7 05:01:58 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 05:01:58 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 05:01:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:58.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:58 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbff400ab70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:01:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:01:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:01:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:58.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:01:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  7 05:01:58 np0005549474 python3.9[255655]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 05:01:58 np0005549474 systemd[1]: Reloading.
Dec  7 05:01:59 np0005549474 kernel: ganesha.nfsd[253554]: segfault at 50 ip 00007fc09cd2932e sp 00007fc051ffa210 error 4 in libntirpc.so.5.8[7fc09cd0e000+2c000] likely on CPU 5 (core 0, socket 5)
Dec  7 05:01:59 np0005549474 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  7 05:01:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[232174]: 07/12/2025 10:01:59 : epoch 69355004 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fbfbc001460 fd 39 proxy ignored for local
Dec  7 05:01:59 np0005549474 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 05:01:59 np0005549474 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 05:01:59 np0005549474 systemd[1]: Started Process Core Dump (PID 255677/UID 0).
Dec  7 05:01:59 np0005549474 systemd[1]: Starting nova_compute container...
Dec  7 05:01:59 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:01:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38537bc085c213dc18659b075778b5c7b73c80dcec329ff5cf5fcdf265cdc22/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38537bc085c213dc18659b075778b5c7b73c80dcec329ff5cf5fcdf265cdc22/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38537bc085c213dc18659b075778b5c7b73c80dcec329ff5cf5fcdf265cdc22/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38537bc085c213dc18659b075778b5c7b73c80dcec329ff5cf5fcdf265cdc22/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38537bc085c213dc18659b075778b5c7b73c80dcec329ff5cf5fcdf265cdc22/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  7 05:01:59 np0005549474 podman[255699]: 2025-12-07 10:01:59.476088775 +0000 UTC m=+0.099494915 container init a0dffcb99129e8c7b9453d1b93f5b368c03204e63ed1d4ac146261014e9364d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, io.buildah.version=1.41.3)
Dec  7 05:01:59 np0005549474 podman[255699]: 2025-12-07 10:01:59.483557458 +0000 UTC m=+0.106963598 container start a0dffcb99129e8c7b9453d1b93f5b368c03204e63ed1d4ac146261014e9364d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec  7 05:01:59 np0005549474 podman[255699]: nova_compute
Dec  7 05:01:59 np0005549474 nova_compute[255714]: + sudo -E kolla_set_configs
Dec  7 05:01:59 np0005549474 systemd[1]: Started nova_compute container.
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Validating config file
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying service configuration files
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Deleting /etc/ceph
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Creating directory /etc/ceph
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /etc/ceph
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Writing out command to execute
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  7 05:01:59 np0005549474 nova_compute[255714]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  7 05:01:59 np0005549474 nova_compute[255714]: ++ cat /run_command
Dec  7 05:01:59 np0005549474 nova_compute[255714]: + CMD=nova-compute
Dec  7 05:01:59 np0005549474 nova_compute[255714]: + ARGS=
Dec  7 05:01:59 np0005549474 nova_compute[255714]: + sudo kolla_copy_cacerts
Dec  7 05:01:59 np0005549474 nova_compute[255714]: + [[ ! -n '' ]]
Dec  7 05:01:59 np0005549474 nova_compute[255714]: + . kolla_extend_start
Dec  7 05:01:59 np0005549474 nova_compute[255714]: Running command: 'nova-compute'
Dec  7 05:01:59 np0005549474 nova_compute[255714]: + echo 'Running command: '\''nova-compute'\'''
Dec  7 05:01:59 np0005549474 nova_compute[255714]: + umask 0022
Dec  7 05:01:59 np0005549474 nova_compute[255714]: + exec nova-compute
Dec  7 05:01:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:59] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 05:01:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:01:59] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 05:02:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:00.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:00.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:02:01 np0005549474 python3.9[255901]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 05:02:01 np0005549474 systemd-coredump[255697]: Process 232181 (ganesha.nfsd) of user 0 dumped core.
Dec  7 05:02:01 np0005549474 systemd-coredump[255697]: Stack trace of thread 62:
Dec  7 05:02:01 np0005549474 systemd-coredump[255697]: #0  0x00007fc09cd2932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Dec  7 05:02:01 np0005549474 systemd-coredump[255697]: ELF object binary architecture: AMD x86-64
Dec  7 05:02:02 np0005549474 systemd[1]: systemd-coredump@7-255677-0.service: Deactivated successfully.
Dec  7 05:02:02 np0005549474 systemd[1]: systemd-coredump@7-255677-0.service: Consumed 1.223s CPU time.
Dec  7 05:02:02 np0005549474 podman[255932]: 2025-12-07 10:02:02.08092387 +0000 UTC m=+0.030016317 container died 42f24b76c048cec8a97922aac30af331b19822c877a7113307baef1d461d712c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 05:02:02 np0005549474 systemd[1]: var-lib-containers-storage-overlay-dc5e2ddb05301261c82d9e0e55c8cd87175e6126422de9b58207fe1e9b739f05-merged.mount: Deactivated successfully.
Dec  7 05:02:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:02:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:02.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:02 np0005549474 podman[255932]: 2025-12-07 10:02:02.121417364 +0000 UTC m=+0.070509791 container remove 42f24b76c048cec8a97922aac30af331b19822c877a7113307baef1d461d712c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 05:02:02 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Main process exited, code=exited, status=139/n/a
Dec  7 05:02:02 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Failed with result 'exit-code'.
Dec  7 05:02:02 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.708s CPU time.
Dec  7 05:02:02 np0005549474 python3.9[256097]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 05:02:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:02.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100202 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:02:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.071 255718 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.071 255718 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.071 255718 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.072 255718 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.208 255718 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.232 255718 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.233 255718 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec  7 05:02:03 np0005549474 python3.9[256253]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.687 255718 INFO nova.virt.driver [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.812 255718 INFO nova.compute.provider_config [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.850 255718 DEBUG oslo_concurrency.lockutils [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.851 255718 DEBUG oslo_concurrency.lockutils [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.851 255718 DEBUG oslo_concurrency.lockutils [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.852 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.852 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.852 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.852 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.852 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.853 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.853 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.853 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.853 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.853 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.854 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.854 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.854 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.854 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.854 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.854 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.855 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.855 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.855 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.855 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.855 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.856 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.856 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.856 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.856 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.856 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.857 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.857 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.857 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.857 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.857 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.858 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.858 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.858 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.858 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.858 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.858 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.859 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.859 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.859 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.859 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.860 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.860 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.860 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.860 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.861 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.861 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.861 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.861 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.861 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.861 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.862 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.862 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.862 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.862 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.862 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.863 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.863 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.863 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.863 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.863 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.864 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.864 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.864 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.864 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.864 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.864 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.865 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.865 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.865 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.865 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.865 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.866 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.866 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.866 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.866 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.866 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.866 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.867 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.867 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.867 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.867 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.867 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.868 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.868 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.868 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.868 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.868 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.869 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.869 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.869 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.869 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.869 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.869 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.870 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.870 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.870 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.870 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.870 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.871 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.871 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.871 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.871 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.871 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.871 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.872 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.872 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.872 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.872 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.872 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.873 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.873 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.873 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.873 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.873 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.874 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.874 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.874 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.874 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.874 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.874 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.875 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.875 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.875 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.875 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.875 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.876 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.876 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.876 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.876 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.876 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.877 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.877 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.877 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.877 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.877 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.877 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.878 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.878 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.878 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.878 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.878 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.879 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.879 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.879 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.879 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.879 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.879 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.880 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.880 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.880 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.880 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.881 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.881 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.881 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.881 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.881 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.882 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.882 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.882 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.882 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.882 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.882 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.883 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.883 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.883 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.883 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.883 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.884 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.884 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.884 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.884 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.884 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.885 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.885 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.885 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.885 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.885 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.886 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.886 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.886 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.886 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.886 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.886 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.887 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.887 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.887 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.887 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.887 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.888 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.888 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.888 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.888 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.888 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.888 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.889 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.889 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.889 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.889 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.889 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.890 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.890 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.890 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.890 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.890 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.891 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.891 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.891 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.891 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.891 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.891 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.892 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.892 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.892 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.892 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.892 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.893 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.893 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.893 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.893 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.893 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.893 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.894 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.894 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.894 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.894 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.894 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.895 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.895 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.895 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.895 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.895 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.896 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.896 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.896 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.896 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.896 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.896 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.897 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.897 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.897 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.897 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.897 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.898 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.898 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.898 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.898 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.898 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.898 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.899 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.899 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.899 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.899 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.899 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.900 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.900 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.900 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.900 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.900 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.900 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.901 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.901 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.901 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.901 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.901 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.902 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.902 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.902 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.902 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.902 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.903 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.903 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.903 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.903 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.903 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.904 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.904 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.904 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.904 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.904 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.905 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.905 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.905 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.905 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.905 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.905 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.906 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.906 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.906 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.906 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.906 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.907 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.907 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.907 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.907 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.907 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.908 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.908 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.908 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.908 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.908 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.909 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.909 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.909 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.909 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.909 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.910 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.910 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.910 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.910 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.910 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.910 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.911 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.911 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.911 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.911 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.911 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.912 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.912 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.912 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.912 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.912 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.912 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.912 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.913 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.913 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.913 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.913 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.913 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.913 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.914 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.914 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.914 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.914 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.914 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.914 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.915 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.915 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.915 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.915 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.915 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.915 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.916 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.916 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.916 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.916 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.916 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.916 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.916 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.916 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.917 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.917 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.917 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.917 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.917 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.918 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.918 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.918 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.918 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.918 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.918 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.918 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.919 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.919 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.919 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.919 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.919 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.919 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.919 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.920 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.920 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.920 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.920 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.920 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.920 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.921 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.921 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.921 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.921 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.921 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.921 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.921 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.922 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.922 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.922 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.922 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.922 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.922 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.922 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.923 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.923 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.923 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.923 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.923 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.923 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.924 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.924 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.924 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.924 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.924 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.924 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.924 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.925 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.925 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.925 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.925 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.925 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.925 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.926 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.926 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.926 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.926 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.926 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.926 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.927 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.927 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.927 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.927 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.927 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.927 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.928 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.928 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.928 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.928 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.928 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.928 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.929 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.929 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.929 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.929 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.929 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.929 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.929 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.930 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.930 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.930 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.930 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.930 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.930 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.930 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.930 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.931 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.931 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.931 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.931 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.931 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.931 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.931 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.932 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.932 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.932 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
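[editor's note] Every line in this dump is produced the same way: at service start-up, with debug logging enabled, oslo.config walks each registered option group and logs "group.option = value" at DEBUG, which is why every entry carries the same "log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609" source tag. A minimal sketch of that mechanism, assuming only stock oslo.config and oslo.log; the group and the two options below are illustrative re-registrations of [keystone] entries seen above, not Nova's own registration code:

    import logging
    from oslo_config import cfg
    from oslo_log import log as oslo_logging

    CONF = cfg.CONF
    # Illustrative re-registration of two of the [keystone] options
    # logged above; in Nova these are registered by the service itself.
    CONF.register_opts(
        [cfg.StrOpt('service_type', default='identity'),
         cfg.ListOpt('valid_interfaces', default=['internal', 'public'])],
        group='keystone')

    oslo_logging.register_options(CONF)
    CONF(['--debug'], project='demo')
    oslo_logging.setup(CONF, 'demo')
    LOG = oslo_logging.getLogger(__name__)

    # Emits one DEBUG line per registered option, e.g.
    #   keystone.service_type      = identity
    #   keystone.valid_interfaces  = ['internal', 'public']
    # each tagged with log_opt_values and its location in cfg.py.
    CONF.log_opt_values(LOG, logging.DEBUG)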
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.932 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.932 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.932 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.932 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.933 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.933 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.933 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.933 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.933 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.933 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.933 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.934 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.934 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.934 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.934 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.934 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.934 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.935 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.935 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.935 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.935 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.935 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.935 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.936 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.936 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.936 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.936 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.936 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.936 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.936 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.937 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.937 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.937 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.937 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.937 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.937 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.937 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.938 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.938 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.938 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.938 255718 WARNING oslo_config.cfg [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  7 05:02:03 np0005549474 nova_compute[255714]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  7 05:02:03 np0005549474 nova_compute[255714]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  7 05:02:03 np0005549474 nova_compute[255714]: and ``live_migration_inbound_addr`` respectively.
Dec  7 05:02:03 np0005549474 nova_compute[255714]: ).  Its value may be silently ignored in the future.#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.938 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.939 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
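[editor's note] The WARNING above is the one actionable item in this stretch of the dump: live_migration_uri is still set (logged just after the warning as qemu+tls://%s/system, with libvirt substituting the migration target for %s), and the message says to express the same intent through live_migration_scheme and live_migration_inbound_addr instead. A hedged nova.conf sketch of that migration; the inbound address is a placeholder, not a value taken from this host:

    [libvirt]
    # Deprecated form currently in effect (see the log lines above):
    #   live_migration_uri = qemu+tls://%s/system
    # Replacement per the deprecation message: keep the qemu+tls
    # scheme and name the target address explicitly.
    live_migration_scheme = tls
    live_migration_inbound_addr = <address-on-the-migration-network>
    # Unchanged, also logged above: TLS for the migration stream.
    live_migration_with_native_tls = true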
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.939 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.939 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.939 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.939 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.939 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.940 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.940 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.940 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.940 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.940 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.940 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.940 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.941 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.941 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.941 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.941 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.941 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.rbd_secret_uuid        = 75f4c9fd-539a-5e17-b55a-0a12a4e2736c log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.941 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.941 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.941 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.942 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.942 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.942 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.942 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.942 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.942 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.942 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.943 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.943 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.943 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.943 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.943 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.943 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.944 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.944 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.944 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.944 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.944 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.944 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.944 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.945 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.945 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.945 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.945 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.945 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.945 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.945 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.946 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.946 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.946 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
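[editor's note] Read together, the libvirt group above describes a Ceph-backed KVM compute node: ephemeral disks go to the RBD pool "vms" as the "openstack" user, guests default to q35 machines with host-model CPUs, and live migration relies on auto-converge and post-copy over native TLS. As a reconstruction (values copied verbatim from the dump, only the notable non-default entries shown), the corresponding nova.conf section would look roughly like:

    [libvirt]
    virt_type = kvm
    cpu_mode = host-model
    hw_machine_type = x86_64=q35
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
    live_migration_permit_auto_converge = true
    live_migration_permit_post_copy = true
    live_migration_with_native_tls = true
    swtpm_enabled = true
    volume_use_multipath = true
    rx_queue_size = 512
    tx_queue_size = 512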
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.946 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.946 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.946 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.946 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.947 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.947 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.947 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.947 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.947 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.947 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.947 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.947 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.948 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.948 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.948 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.948 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.948 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.948 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.948 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.949 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.949 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.949 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.949 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.949 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.949 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.950 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.950 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.950 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
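[editor's note] Note the masking at neutron.metadata_proxy_shared_secret = **** (and at placement.password further down): oslo.config never prints the value of an option registered with secret=True; log_opt_values substitutes "****". A minimal sketch of that behaviour, assuming stock oslo.config; the option name is reused from the dump purely for illustration:

    import logging
    from oslo_config import cfg
    from oslo_log import log as oslo_logging

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.StrOpt('metadata_proxy_shared_secret', secret=True)],
        group='neutron')
    oslo_logging.register_options(CONF)
    CONF(['--debug'], project='demo')
    oslo_logging.setup(CONF, 'demo')
    LOG = oslo_logging.getLogger(__name__)

    CONF.set_override('metadata_proxy_shared_secret', 's3cret',
                      group='neutron')
    # secret=True is what produces the masking: this logs
    # "neutron.metadata_proxy_shared_secret = ****", never the value.
    CONF.log_opt_values(LOG, logging.DEBUG)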
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.950 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.950 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.950 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.950 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.951 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
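[editor's note] The notifications group says this node emits only legacy notifications (notification_format = unversioned); the versioned_notifications_topics setting only takes effect when the format is "versioned" or "both". For context, a hedged sketch of what a consumer of that topic would look like with oslo.messaging; the transport URL is a placeholder, not this deployment's:

    import oslo_messaging
    from oslo_config import cfg

    # Placeholder transport URL; the real one comes from nova.conf.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    targets = [oslo_messaging.Target(topic='versioned_notifications')]

    class Endpoint(object):
        # Method name maps to the notification priority ("info").
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            print(event_type, payload)

    server = oslo_messaging.get_notification_listener(
        transport, targets, [Endpoint()], executor='threading')
    server.start()
    server.wait()  # blocks, dispatching notifications to Endpoint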
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.951 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.951 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.951 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
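[editor's note] pci.alias and pci.device_spec are both empty here, so this node exposes no PCI passthrough devices. For contrast, a populated pair takes JSON entries; in the hedged sketch below the vendor/product IDs and alias name are hypothetical examples, not devices from this host:

    [pci]
    # Hypothetical example only: whitelist one vendor:product for
    # passthrough and give it an alias flavors can request.
    device_spec = { "vendor_id": "8086", "product_id": "154d" }
    alias = { "vendor_id": "8086", "product_id": "154d", "device_type": "type-PF", "name": "nic-x520" }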
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.951 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.951 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.952 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.952 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.952 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.952 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.952 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.952 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.952 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.953 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.953 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.953 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.953 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.953 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.953 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.953 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.954 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.954 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.954 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.954 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.954 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.954 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.954 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.954 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.955 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.955 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.955 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.955 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.955 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.955 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.955 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.956 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.956 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.956 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.956 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.956 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.956 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
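[editor's note] The placement group above is a standard keystoneauth1 password-auth block: authenticate as user "nova" in project "service" against the internal Keystone endpoint, then reach the placement service via the internal interface in regionOne. Loaded by hand instead of from nova.conf, the same settings would look like this hedged sketch (the password is elided, matching the "****" in the log):

    from keystoneauth1 import adapter, session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='nova',
        password='<redacted>',          # logged above as ****
        project_name='service',
        user_domain_name='Default',
        project_domain_name='Default')
    sess = session.Session(auth=auth)
    placement = adapter.Adapter(
        session=sess,
        service_type='placement',
        region_name='regionOne',
        interface='internal')

    # Simple smoke test against the placement API root resource.
    resp = placement.get('/resource_providers', raise_exc=False)
    print(resp.status_code)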
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.956 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.957 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.957 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.957 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.957 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.957 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.957 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.957 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.958 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.958 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.958 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.958 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.958 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.958 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.958 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.959 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.959 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.959 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.959 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.959 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.959 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.959 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.960 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.960 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.960 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.960 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
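[editor's note] The scheduler group above bounds scheduling behaviour (for instance, at most scheduler.max_attempts = 3 build attempts and up to 1000 placement candidates), while the filter_scheduler entries that follow name the actual host-selection pipeline: a host must pass every filter in enabled_filters, then weight multipliers rank the survivors. A toy sketch of that filter-then-weigh shape; the hosts, the stand-in filter, and the weights are invented for the example and are not Nova's real classes:

    # Toy illustration of the filter/weigher pipeline shape.
    hosts = [
        {'name': 'host1', 'free_vcpus': 8, 'io_ops': 2},
        {'name': 'host2', 'free_vcpus': 2, 'io_ops': 0},
    ]

    def compute_filter(host, spec):
        # Stands in for ComputeFilter: the host must fit the request.
        return host['free_vcpus'] >= spec['vcpus']

    enabled_filters = [compute_filter]

    def weigh(host):
        # Mirrors the multiplier idea from the lines below:
        # cpu_weight_multiplier = 1.0, io_ops_weight_multiplier = -1.0.
        return 1.0 * host['free_vcpus'] + (-1.0) * host['io_ops']

    spec = {'vcpus': 2}
    passing = [h for h in hosts
               if all(f(h, spec) for f in enabled_filters)]
    best = max(passing, key=weigh)
    print(best['name'])  # -> host1 (weight 6.0 beats 2.0)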
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.960 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.960 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.960 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.961 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.961 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.961 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.961 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.961 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.961 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.961 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.962 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.962 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.962 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.962 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.962 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.962 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.962 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.963 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.963 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.963 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.963 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.963 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.963 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.963 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
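
The [filter_scheduler] values above drive host selection: enabled_filters prune the candidate hosts, then weighers score the survivors, and each weigher's normalized score is scaled by its *_weight_multiplier before the totals are summed (a negative multiplier such as io_ops_weight_multiplier = -1.0 turns that metric into a penalty). A toy sketch of that filter-then-weigh flow, with hypothetical hosts and only two weighers, not nova's actual implementation:

    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        free_ram: float  # normalized 0..1 (higher is better)
        io_ops: float    # normalized 0..1 (higher means busier)

    def passes_filters(host: Host) -> bool:
        # Stand-in for ComputeFilter, ImagePropertiesFilter, etc.
        return host.free_ram > 0.0

    RAM_MULT = 1.0      # filter_scheduler.ram_weight_multiplier
    IO_OPS_MULT = -1.0  # filter_scheduler.io_ops_weight_multiplier

    def weigh(host: Host) -> float:
        # Sum of multiplier-scaled weigher scores; the highest total wins.
        return RAM_MULT * host.free_ram + IO_OPS_MULT * host.io_ops

    hosts = [Host('hostA', 0.8, 0.2), Host('hostB', 0.9, 0.9)]
    best = max((h for h in hosts if passes_filters(h)), key=weigh)
    print(best.name)  # hostA: 0.6 beats hostB's 0.0 once I/O load is penalized
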
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.963 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.964 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.964 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.964 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.964 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.964 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.964 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.965 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.965 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.965 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.965 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.965 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.965 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.965 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.966 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.966 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.966 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.966 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.966 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.966 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.966 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.967 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.967 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.967 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.967 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.967 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.968 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.968 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.968 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.968 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.968 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.968 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.968 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.969 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.969 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.969 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.969 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.969 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.969 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.969 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.969 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.970 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.970 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.970 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.970 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.970 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.970 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.970 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.971 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.971 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.971 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.971 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.971 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.971 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.971 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.971 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.972 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.972 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.972 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.972 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.972 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.972 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.972 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.973 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.973 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.973 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.973 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.973 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.973 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.973 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.974 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.974 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.974 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.974 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.974 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.974 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.975 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.975 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.975 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.975 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.975 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.975 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.976 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.976 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.976 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.976 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.976 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.976 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.976 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.977 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.977 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.977 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.977 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.977 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.977 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.977 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.978 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.978 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.978 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.978 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.978 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.978 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.979 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.979 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.979 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.979 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.979 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.979 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.979 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.980 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.980 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.980 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.980 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.980 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.980 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.980 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.981 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.981 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.981 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.981 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.981 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.981 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.981 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.982 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.982 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.982 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.982 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
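
The [oslo_policy] block above shows API policy enforcement configured for the new secure-RBAC defaults (enforce_new_defaults and enforce_scope both True), with rules read from policy.yaml plus any overrides dropped into policy.d. A hedged sketch of how a service consults these options through oslo.policy; the rule name and credential values here are illustrative, not taken from this log:

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    CONF([])  # a real service parses nova.conf here, including [oslo_policy]

    # Enforcer picks up oslo_policy.policy_file / policy_dirs from CONF and
    # honors enforce_scope / enforce_new_defaults when evaluating rules.
    enforcer = policy.Enforcer(CONF)

    creds = {'project_id': 'demo', 'roles': ['member']}   # illustrative
    target = {'project_id': 'demo'}                       # illustrative
    print(enforcer.enforce('os_compute_api:servers:show', target, creds))
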
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.982 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.982 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.982 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.983 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.983 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.983 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.983 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.983 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.983 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.983 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.984 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.984 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.984 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.984 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.984 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.984 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.984 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.985 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.985 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.985 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.985 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.985 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.985 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.985 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.986 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.986 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.986 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.986 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.986 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.986 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.986 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.987 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.987 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.987 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.987 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.987 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.987 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.988 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.988 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.988 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.988 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.988 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.988 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.988 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.989 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.989 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.989 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.989 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.989 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.989 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.989 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.990 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.990 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.990 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.990 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.990 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.990 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.990 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.991 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.991 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.991 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.991 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.991 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.991 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.991 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.992 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.992 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.992 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.992 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.992 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.992 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.992 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.992 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.993 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.993 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.993 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.993 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.993 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.993 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.993 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.994 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.994 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.994 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.994 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.994 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.994 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.995 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.995 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.995 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.995 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.995 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.995 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.995 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.996 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.996 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.996 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.996 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.996 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.996 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.996 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.997 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.997 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.997 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.997 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.997 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.997 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.997 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.998 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.998 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.998 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.998 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.998 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.998 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.998 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.999 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.999 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.999 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.999 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.999 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:03 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.999 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:03.999 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.000 255718 DEBUG oslo_service.service [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.001 255718 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.034 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.035 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.035 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.036 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  7 05:02:04 np0005549474 systemd[1]: Starting libvirt QEMU daemon...
Dec  7 05:02:04 np0005549474 systemd[1]: Started libvirt QEMU daemon.
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.111 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4f8bebfb20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.114 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4f8bebfb20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.115 255718 INFO nova.virt.libvirt.driver [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Connection event '1' reason 'None'
Dec  7 05:02:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 05:02:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:04.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.153 255718 WARNING nova.virt.libvirt.driver [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.153 255718 DEBUG nova.virt.libvirt.volume.mount [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec  7 05:02:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:04.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.886 255718 INFO nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Libvirt host capabilities <capabilities>
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <host>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <uuid>21fd7ebb-512a-4f77-9836-e8bca79e9734</uuid>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <cpu>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <arch>x86_64</arch>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model>EPYC-Rome-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <vendor>AMD</vendor>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <microcode version='16777317'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <signature family='23' model='49' stepping='0'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='x2apic'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='tsc-deadline'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='osxsave'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='hypervisor'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='tsc_adjust'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='spec-ctrl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='stibp'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='arch-capabilities'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='ssbd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='cmp_legacy'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='topoext'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='virt-ssbd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='lbrv'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='tsc-scale'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='vmcb-clean'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='pause-filter'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='pfthreshold'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='svme-addr-chk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='rdctl-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='skip-l1dfl-vmentry'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='mds-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature name='pschange-mc-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <pages unit='KiB' size='4'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <pages unit='KiB' size='2048'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <pages unit='KiB' size='1048576'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </cpu>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <power_management>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <suspend_mem/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </power_management>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <iommu support='no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <migration_features>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <live/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <uri_transports>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <uri_transport>tcp</uri_transport>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <uri_transport>rdma</uri_transport>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </uri_transports>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </migration_features>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <topology>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <cells num='1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <cell id='0'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:          <memory unit='KiB'>7864320</memory>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:          <pages unit='KiB' size='2048'>0</pages>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:          <distances>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:            <sibling id='0' value='10'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:          </distances>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:          <cpus num='8'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:          </cpus>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        </cell>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </cells>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </topology>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <cache>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </cache>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <secmodel>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model>selinux</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <doi>0</doi>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </secmodel>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <secmodel>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model>dac</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <doi>0</doi>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </secmodel>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  </host>
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <guest>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <os_type>hvm</os_type>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <arch name='i686'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <wordsize>32</wordsize>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <domain type='qemu'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <domain type='kvm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </arch>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <features>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <pae/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <nonpae/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <acpi default='on' toggle='yes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <apic default='on' toggle='no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <cpuselection/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <deviceboot/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <disksnapshot default='on' toggle='no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <externalSnapshot/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </features>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  </guest>
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <guest>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <os_type>hvm</os_type>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <arch name='x86_64'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <wordsize>64</wordsize>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <domain type='qemu'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <domain type='kvm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </arch>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <features>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <acpi default='on' toggle='yes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <apic default='on' toggle='no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <cpuselection/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <deviceboot/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <disksnapshot default='on' toggle='no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <externalSnapshot/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </features>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  </guest>
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 
Dec  7 05:02:04 np0005549474 nova_compute[255714]: </capabilities>
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.892 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.911 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  7 05:02:04 np0005549474 nova_compute[255714]: <domainCapabilities>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <path>/usr/libexec/qemu-kvm</path>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <domain>kvm</domain>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <arch>i686</arch>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <vcpu max='4096'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <iothreads supported='yes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <os supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <enum name='firmware'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <loader supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>rom</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>pflash</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='readonly'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>yes</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>no</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='secure'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>no</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </loader>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  </os>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <cpu>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <mode name='host-passthrough' supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='hostPassthroughMigratable'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>on</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>off</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <mode name='maximum' supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='maximumMigratable'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>on</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>off</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <mode name='host-model' supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <vendor>AMD</vendor>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='x2apic'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc-deadline'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='hypervisor'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc_adjust'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='spec-ctrl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='stibp'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='ssbd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='cmp_legacy'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='overflow-recov'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='succor'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='ibrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='amd-ssbd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='virt-ssbd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='lbrv'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc-scale'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='vmcb-clean'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='flushbyasid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='pause-filter'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='pfthreshold'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='svme-addr-chk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='disable' name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <mode name='custom' supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-IBRS'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-noTSX'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v5'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Denverton'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Dhyana-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Genoa'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='auto-ibrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Genoa-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='auto-ibrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-v4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx10'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx10-128'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx10-256'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx10-512'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Haswell'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Haswell-IBRS'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Haswell-noTSX'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-noTSX'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v5'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v6'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v7'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-IBRS'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='KnightsMill'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-4fmaps'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-4vnniw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512er'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512pf'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='KnightsMill-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-4fmaps'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-4vnniw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512er'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512pf'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G4-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G5'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tbm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G5-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tbm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='SierraForest'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-ne-convert'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='cmpccxadd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='SierraForest-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-ne-convert'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='cmpccxadd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-IBRS'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-IBRS'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v5'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Snowridge'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='athlon'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='athlon-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='core2duo'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='core2duo-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='coreduo'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='coreduo-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='n270'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='n270-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='phenom'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='phenom-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  </cpu>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <memoryBacking supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <enum name='sourceType'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <value>file</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <value>anonymous</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <value>memfd</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  </memoryBacking>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <devices>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <disk supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='diskDevice'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>disk</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>cdrom</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>floppy</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>lun</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='bus'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>fdc</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>scsi</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>sata</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>virtio-transitional</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>virtio-non-transitional</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </disk>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <graphics supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>vnc</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>egl-headless</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>dbus</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </graphics>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <video supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='modelType'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>vga</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>cirrus</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>none</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>bochs</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>ramfb</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </video>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <hostdev supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='mode'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>subsystem</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='startupPolicy'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>default</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>mandatory</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>requisite</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>optional</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='subsysType'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>pci</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>scsi</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='capsType'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='pciBackend'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </hostdev>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <rng supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>virtio-transitional</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>virtio-non-transitional</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>random</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>egd</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>builtin</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </rng>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <filesystem supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='driverType'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>path</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>handle</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>virtiofs</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </filesystem>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <tpm supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>tpm-tis</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>tpm-crb</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>emulator</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>external</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='backendVersion'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>2.0</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </tpm>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <redirdev supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='bus'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </redirdev>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <channel supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>pty</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>unix</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </channel>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <crypto supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='model'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>qemu</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>builtin</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </crypto>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <interface supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='backendType'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>default</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>passt</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </interface>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <panic supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>isa</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>hyperv</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </panic>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <console supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>null</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>vc</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>pty</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>dev</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>file</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>pipe</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>stdio</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>udp</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>tcp</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>unix</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>qemu-vdagent</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>dbus</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </console>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  </devices>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <features>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <gic supported='no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <vmcoreinfo supported='yes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <genid supported='yes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <backingStoreInput supported='yes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <backup supported='yes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <async-teardown supported='yes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <ps2 supported='yes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <sev supported='no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <sgx supported='no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <hyperv supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='features'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>relaxed</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>vapic</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>spinlocks</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>vpindex</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>runtime</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>synic</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>stimer</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>reset</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>vendor_id</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>frequencies</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>reenlightenment</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>tlbflush</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>ipi</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>avic</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>emsr_bitmap</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>xmm_input</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <defaults>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <spinlocks>4095</spinlocks>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <stimer_direct>on</stimer_direct>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <tlbflush_direct>on</tlbflush_direct>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <tlbflush_extended>on</tlbflush_extended>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </defaults>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </hyperv>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <launchSecurity supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='sectype'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>tdx</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </launchSecurity>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  </features>
Dec  7 05:02:04 np0005549474 nova_compute[255714]: </domainCapabilities>
Dec  7 05:02:04 np0005549474 nova_compute[255714]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  7 05:02:04 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.918 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  7 05:02:04 np0005549474 nova_compute[255714]: <domainCapabilities>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <path>/usr/libexec/qemu-kvm</path>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <domain>kvm</domain>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <arch>i686</arch>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <vcpu max='240'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <iothreads supported='yes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <os supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <enum name='firmware'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <loader supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>rom</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>pflash</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='readonly'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>yes</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>no</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='secure'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>no</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </loader>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  </os>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:  <cpu>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <mode name='host-passthrough' supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='hostPassthroughMigratable'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>on</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>off</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <mode name='maximum' supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <enum name='maximumMigratable'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>on</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <value>off</value>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <mode name='host-model' supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <vendor>AMD</vendor>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='x2apic'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc-deadline'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='hypervisor'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc_adjust'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='spec-ctrl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='stibp'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='ssbd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='cmp_legacy'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='overflow-recov'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='succor'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='ibrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='amd-ssbd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='virt-ssbd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='lbrv'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc-scale'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='vmcb-clean'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='flushbyasid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='pause-filter'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='pfthreshold'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='svme-addr-chk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <feature policy='disable' name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:    <mode name='custom' supported='yes'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-IBRS'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-noTSX'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v5'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Denverton'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='Dhyana-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Genoa'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='auto-ibrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Genoa-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='auto-ibrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-v3'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='EPYC-v4'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids-v1'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids-v2'>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx10'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx10-128'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx10-256'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx10-512'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:04 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-noTSX'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-noTSX'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v5'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v6'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v7'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='KnightsMill'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4fmaps'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4vnniw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512er'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512pf'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='KnightsMill-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4fmaps'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4vnniw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512er'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512pf'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G4-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G5'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tbm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G5-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tbm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SierraForest'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ne-convert'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cmpccxadd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SierraForest-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ne-convert'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cmpccxadd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v5'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='athlon'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='athlon-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='core2duo'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='core2duo-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='coreduo'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='coreduo-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='n270'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='n270-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='phenom'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='phenom-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </cpu>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <memoryBacking supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <enum name='sourceType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>file</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>anonymous</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>memfd</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </memoryBacking>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <devices>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <disk supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='diskDevice'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>disk</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>cdrom</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>floppy</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>lun</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='bus'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>ide</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>fdc</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>scsi</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>sata</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-non-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </disk>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <graphics supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vnc</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>egl-headless</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>dbus</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </graphics>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <video supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='modelType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vga</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>cirrus</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>none</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>bochs</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>ramfb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </video>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <hostdev supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='mode'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>subsystem</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='startupPolicy'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>default</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>mandatory</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>requisite</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>optional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='subsysType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pci</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>scsi</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='capsType'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='pciBackend'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </hostdev>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <rng supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-non-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>random</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>egd</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>builtin</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </rng>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <filesystem supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='driverType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>path</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>handle</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtiofs</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </filesystem>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <tpm supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tpm-tis</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tpm-crb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>emulator</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>external</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendVersion'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>2.0</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </tpm>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <redirdev supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='bus'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </redirdev>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <channel supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pty</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>unix</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </channel>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <crypto supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>qemu</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>builtin</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </crypto>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <interface supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>default</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>passt</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </interface>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <panic supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>isa</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>hyperv</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </panic>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <console supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>null</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vc</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pty</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>dev</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>file</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pipe</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>stdio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>udp</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tcp</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>unix</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>qemu-vdagent</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>dbus</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </console>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </devices>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <features>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <gic supported='no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <vmcoreinfo supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <genid supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <backingStoreInput supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <backup supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <async-teardown supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <ps2 supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <sev supported='no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <sgx supported='no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <hyperv supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='features'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>relaxed</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vapic</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>spinlocks</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vpindex</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>runtime</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>synic</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>stimer</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>reset</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vendor_id</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>frequencies</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>reenlightenment</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tlbflush</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>ipi</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>avic</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>emsr_bitmap</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>xmm_input</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <defaults>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <spinlocks>4095</spinlocks>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <stimer_direct>on</stimer_direct>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <tlbflush_direct>on</tlbflush_direct>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <tlbflush_extended>on</tlbflush_extended>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </defaults>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </hyperv>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <launchSecurity supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='sectype'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tdx</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </launchSecurity>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </features>
Dec  7 05:02:05 np0005549474 nova_compute[255714]: </domainCapabilities>
Dec  7 05:02:05 np0005549474 nova_compute[255714]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
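[editor's note] The domainCapabilities XML above is what libvirt returns when Nova's _get_domain_capabilities helper queries the hypervisor for a given emulator/arch/machine-type combination. A minimal sketch of the same query via the libvirt-python binding is shown below; the connection URI and the parameter values are assumptions taken from fields visible in this log (<path>, <arch>, <machine>, <domain>), not from Nova's code.

    # Sketch only: reproduce the capability query Nova logs above.
    import libvirt

    conn = libvirt.open('qemu:///system')  # assumed local system URI
    xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator binary, from <path> above
        'x86_64',                 # architecture, from <arch> above
        'q35',                    # machine type, one of {'q35', 'pc'} below
        'kvm',                    # virt type, from <domain>kvm</domain> above
        0,                        # flags (none defined for this call)
    )
    print(xml)                    # prints <domainCapabilities>...</domainCapabilities>
    conn.close()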
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.945 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:04.949 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  7 05:02:05 np0005549474 nova_compute[255714]: <domainCapabilities>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <path>/usr/libexec/qemu-kvm</path>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <domain>kvm</domain>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <arch>x86_64</arch>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <vcpu max='4096'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <iothreads supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <os supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <enum name='firmware'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>efi</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <loader supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>rom</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pflash</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='readonly'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>yes</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>no</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='secure'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>yes</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>no</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </loader>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </os>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <cpu>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <mode name='host-passthrough' supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='hostPassthroughMigratable'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>on</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>off</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <mode name='maximum' supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='maximumMigratable'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>on</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>off</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <mode name='host-model' supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <vendor>AMD</vendor>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='x2apic'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc-deadline'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='hypervisor'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc_adjust'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='spec-ctrl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='stibp'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='ssbd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='cmp_legacy'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='overflow-recov'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='succor'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='ibrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='amd-ssbd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='virt-ssbd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='lbrv'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc-scale'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='vmcb-clean'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='flushbyasid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='pause-filter'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='pfthreshold'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='svme-addr-chk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='disable' name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <mode name='custom' supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-noTSX'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v5'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Denverton'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Dhyana-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Genoa'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='auto-ibrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Genoa-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='auto-ibrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx10'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx10-128'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx10-256'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx10-512'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-noTSX'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-noTSX'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v5'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v6'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v7'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='KnightsMill'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4fmaps'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4vnniw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512er'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512pf'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='KnightsMill-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4fmaps'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4vnniw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512er'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512pf'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G4-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G5'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tbm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G5-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tbm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SierraForest'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ne-convert'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cmpccxadd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SierraForest-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ne-convert'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cmpccxadd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v5'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='athlon'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='athlon-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='core2duo'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='core2duo-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='coreduo'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='coreduo-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='n270'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='n270-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='phenom'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='phenom-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </cpu>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <memoryBacking supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <enum name='sourceType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>file</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>anonymous</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>memfd</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </memoryBacking>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <devices>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <disk supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='diskDevice'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>disk</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>cdrom</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>floppy</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>lun</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='bus'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>fdc</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>scsi</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>sata</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-non-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </disk>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <graphics supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vnc</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>egl-headless</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>dbus</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </graphics>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <video supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='modelType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vga</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>cirrus</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>none</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>bochs</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>ramfb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </video>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <hostdev supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='mode'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>subsystem</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='startupPolicy'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>default</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>mandatory</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>requisite</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>optional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='subsysType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pci</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>scsi</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='capsType'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='pciBackend'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </hostdev>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <rng supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-non-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>random</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>egd</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>builtin</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </rng>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <filesystem supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='driverType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>path</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>handle</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtiofs</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </filesystem>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <tpm supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tpm-tis</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tpm-crb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>emulator</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>external</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendVersion'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>2.0</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </tpm>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <redirdev supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='bus'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </redirdev>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <channel supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pty</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>unix</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </channel>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <crypto supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>qemu</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>builtin</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </crypto>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <interface supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>default</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>passt</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </interface>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <panic supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>isa</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>hyperv</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </panic>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <console supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>null</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vc</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pty</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>dev</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>file</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pipe</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>stdio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>udp</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tcp</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>unix</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>qemu-vdagent</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>dbus</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </console>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </devices>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <features>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <gic supported='no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <vmcoreinfo supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <genid supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <backingStoreInput supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <backup supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <async-teardown supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <ps2 supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <sev supported='no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <sgx supported='no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <hyperv supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='features'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>relaxed</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vapic</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>spinlocks</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vpindex</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>runtime</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>synic</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>stimer</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>reset</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vendor_id</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>frequencies</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>reenlightenment</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tlbflush</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>ipi</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>avic</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>emsr_bitmap</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>xmm_input</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <defaults>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <spinlocks>4095</spinlocks>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <stimer_direct>on</stimer_direct>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <tlbflush_direct>on</tlbflush_direct>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <tlbflush_extended>on</tlbflush_extended>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </defaults>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </hyperv>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <launchSecurity supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='sectype'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tdx</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </launchSecurity>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </features>
Dec  7 05:02:05 np0005549474 nova_compute[255714]: </domainCapabilities>
Dec  7 05:02:05 np0005549474 nova_compute[255714]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
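[note: the <domainCapabilities> document above is what nova's _get_domain_capabilities() fetches from libvirtd for each (arch, machine type) pair it probes. A minimal standalone sketch of the same query follows, assuming the libvirt-python bindings and local access to qemu:///system; the connection URI is an assumption, while the emulator path, arch, machine alias, and virt type are taken from the XML itself.]

    # Hypothetical standalone reproduction of the capabilities query logged above.
    import libvirt

    conn = libvirt.open("qemu:///system")  # assumes local libvirtd access
    # getDomainCapabilities(emulatorbin, arch, machine, virttype, flags)
    xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator path, as reported in <path> above
        "x86_64",                 # architecture, as reported in <arch>
        "pc",                     # machine type alias (resolves to pc-i440fx-rhel7.6.0)
        "kvm",                    # virt type, as reported in <domain>
        0,                        # no flags
    )
    print(xml)  # prints the same <domainCapabilities> document seen in the log
    conn.close()

[the equivalent CLI query, for comparison: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine pc --virttype kvm]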
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.013 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  7 05:02:05 np0005549474 nova_compute[255714]: <domainCapabilities>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <path>/usr/libexec/qemu-kvm</path>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <domain>kvm</domain>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <arch>x86_64</arch>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <vcpu max='240'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <iothreads supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <os supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <enum name='firmware'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <loader supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>rom</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pflash</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='readonly'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>yes</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>no</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='secure'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>no</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </loader>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </os>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <cpu>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <mode name='host-passthrough' supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='hostPassthroughMigratable'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>on</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>off</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <mode name='maximum' supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='maximumMigratable'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>on</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>off</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <mode name='host-model' supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <vendor>AMD</vendor>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='x2apic'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc-deadline'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='hypervisor'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc_adjust'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='spec-ctrl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='stibp'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='ssbd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='cmp_legacy'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='overflow-recov'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='succor'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='ibrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='amd-ssbd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='virt-ssbd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='lbrv'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='tsc-scale'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='vmcb-clean'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='flushbyasid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='pause-filter'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='pfthreshold'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='svme-addr-chk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <feature policy='disable' name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <mode name='custom' supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-noTSX'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Broadwell-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cascadelake-Server-v5'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Cooperlake-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Denverton'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Denverton-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Dhyana-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Genoa'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='auto-ibrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Genoa-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='auto-ibrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Milan-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amd-psfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='stibp-always-on'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-Rome-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='EPYC-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='GraniteRapids-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx10'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx10-128'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx10-256'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx10-512'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='prefetchiti'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-noTSX'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Haswell-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-noTSX'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v5'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v6'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Icelake-Server-v7'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='IvyBridge-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='KnightsMill'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4fmaps'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4vnniw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512er'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512pf'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='KnightsMill-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4fmaps'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-4vnniw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512er'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512pf'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G4-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G5'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tbm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Opteron_G5-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fma4'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tbm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xop'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SapphireRapids-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='amx-tile'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-bf16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-fp16'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bitalg'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrc'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fzrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='la57'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='taa-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xfd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SierraForest'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ne-convert'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cmpccxadd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='SierraForest-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ifma'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-ne-convert'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx-vnni-int8'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cmpccxadd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fbsdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='fsrs'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ibrs-all'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mcdt-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pbrsb-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='psdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='serialize'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vaes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Client-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='hle'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='rtm'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Skylake-Server-v5'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512bw'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512cd'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512dq'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512f'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='avx512vl'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='invpcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pcid'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='pku'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  7 05:02:05 np0005549474 python3.9[256467]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='mpx'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v2'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v3'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='core-capability'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='split-lock-detect'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='Snowridge-v4'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='cldemote'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='erms'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='gfni'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdir64b'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='movdiri'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='xsaves'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='athlon'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='athlon-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='core2duo'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='core2duo-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='coreduo'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='coreduo-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='n270'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='n270-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='ss'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='phenom'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <blockers model='phenom-v1'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnow'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <feature name='3dnowext'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </blockers>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </mode>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </cpu>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <memoryBacking supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <enum name='sourceType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>file</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>anonymous</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <value>memfd</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </memoryBacking>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <devices>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <disk supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='diskDevice'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>disk</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>cdrom</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>floppy</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>lun</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='bus'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>ide</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>fdc</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>scsi</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>sata</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-non-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </disk>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <graphics supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vnc</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>egl-headless</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>dbus</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </graphics>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <video supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='modelType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vga</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>cirrus</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>none</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>bochs</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>ramfb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </video>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <hostdev supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='mode'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>subsystem</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='startupPolicy'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>default</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>mandatory</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>requisite</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>optional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='subsysType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pci</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>scsi</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='capsType'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='pciBackend'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </hostdev>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <rng supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtio-non-transitional</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>random</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>egd</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>builtin</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </rng>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <filesystem supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='driverType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>path</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>handle</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>virtiofs</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </filesystem>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <tpm supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tpm-tis</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tpm-crb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>emulator</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>external</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendVersion'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>2.0</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </tpm>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <redirdev supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='bus'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>usb</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </redirdev>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <channel supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pty</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>unix</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </channel>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <crypto supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>qemu</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendModel'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>builtin</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </crypto>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <interface supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='backendType'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>default</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>passt</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </interface>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <panic supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='model'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>isa</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>hyperv</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </panic>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <console supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='type'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>null</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vc</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pty</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>dev</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>file</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>pipe</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>stdio</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>udp</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tcp</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>unix</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>qemu-vdagent</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>dbus</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </console>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </devices>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  <features>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <gic supported='no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <vmcoreinfo supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <genid supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <backingStoreInput supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <backup supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <async-teardown supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <ps2 supported='yes'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <sev supported='no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <sgx supported='no'/>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <hyperv supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='features'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>relaxed</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vapic</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>spinlocks</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vpindex</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>runtime</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>synic</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>stimer</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>reset</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>vendor_id</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>frequencies</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>reenlightenment</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tlbflush</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>ipi</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>avic</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>emsr_bitmap</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>xmm_input</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <defaults>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <spinlocks>4095</spinlocks>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <stimer_direct>on</stimer_direct>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <tlbflush_direct>on</tlbflush_direct>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <tlbflush_extended>on</tlbflush_extended>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </defaults>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </hyperv>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    <launchSecurity supported='yes'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      <enum name='sectype'>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:        <value>tdx</value>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:      </enum>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:    </launchSecurity>
Dec  7 05:02:05 np0005549474 nova_compute[255714]:  </features>
Dec  7 05:02:05 np0005549474 nova_compute[255714]: </domainCapabilities>
Dec  7 05:02:05 np0005549474 nova_compute[255714]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
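
[editor's note] The XML dump above is the document nova's _get_domain_capabilities retrieves from libvirtd. A minimal sketch (not Nova's code) of fetching the same document with libvirt-python and listing the CPU models this host cannot provide, together with the blocking features — assuming the model list sits in the usual <mode name='custom'> element, as libvirt reports it here:

    # Sketch: query domainCapabilities like the dump above and print
    # unusable CPU models with their blocker features.
    # Assumes libvirt-python is installed and qemu:///system is reachable.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    caps = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    root = ET.fromstring(caps)

    mode = root.find(".//cpu/mode[@name='custom']")
    # <blockers model='X'> elements sit next to the <model> elements
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in mode.findall('blockers')
    }
    for model in mode.findall('model'):
        if model.get('usable') == 'no':
            missing = blockers.get(model.text, [])
            print(f"{model.text}: blocked by {', '.join(missing)}")
    conn.close()

Run against this host it would report, e.g., Skylake-Client-IBRS blocked by erms, hle, invpcid, pcid, rtm — matching the blockers listed in the log.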
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.079 255718 DEBUG nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.079 255718 INFO nova.virt.libvirt.host [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Secure Boot support detected#033[00m
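
[editor's note] The "Secure Boot support detected" line comes from a check against the same capabilities document. A hedged sketch of the idea, assuming the full dump (its <os> section is not shown in this excerpt) carries the usual <enum name='firmware'> and <loader><enum name='secure'> blocks; this is an illustration, not Nova's exact implementation:

    # Sketch: decide secure-boot support from a domainCapabilities string.
    # `caps_xml` is assumed to hold the complete XML document.
    import xml.etree.ElementTree as ET

    def supports_secure_boot(caps_xml: str) -> bool:
        root = ET.fromstring(caps_xml)
        firmwares = [v.text for v in
                     root.findall(".//os/enum[@name='firmware']/value")]
        secure = [v.text for v in
                  root.findall(".//os/loader/enum[@name='secure']/value")]
        # host qualifies when EFI firmware and a secure-capable loader exist
        return 'efi' in firmwares and 'yes' in secure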
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.081 255718 INFO nova.virt.libvirt.driver [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.089 255718 DEBUG nova.virt.libvirt.driver [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.135 255718 INFO nova.virt.node [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Determined node identity 7e48a19e-1e29-4c67-8ffa-7daf855825bb from /var/lib/nova/compute_id#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.152 255718 WARNING nova.compute.manager [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Compute nodes ['7e48a19e-1e29-4c67-8ffa-7daf855825bb'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.203 255718 INFO nova.compute.manager [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.240 255718 WARNING nova.compute.manager [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.240 255718 DEBUG oslo_concurrency.lockutils [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.240 255718 DEBUG oslo_concurrency.lockutils [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.240 255718 DEBUG oslo_concurrency.lockutils [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
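
[editor's note] The three lockutils lines above show the acquire/release pattern oslo.concurrency provides: the resource tracker serializes its work behind a named semaphore. A minimal sketch of that API; the lock name mirrors the log, the function body is a stand-in:

    # Sketch: serialize callers behind the "compute_resources" lock,
    # producing acquire/release debug lines like those above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # runs with the lock held; concurrent callers block until release
        pass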
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.241 255718 DEBUG nova.compute.resource_tracker [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.241 255718 DEBUG oslo_concurrency.processutils [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:02:05 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 05:02:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:02:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797389742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.659 255718 DEBUG oslo_concurrency.processutils [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
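
[editor's note] The tracker gathers Ceph usage by shelling out through oslo.concurrency, as the CMD lines above show. A sketch replaying the same command and reading the pool stats; command, id, and conf path are copied from the log, and the 'total_avail_bytes' key follows current ceph df JSON output:

    # Sketch: run `ceph df --format=json` like the tracker does and
    # report available capacity. Requires a reachable Ceph client setup.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'])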
Dec  7 05:02:05 np0005549474 systemd[1]: Starting libvirt nodedev daemon...
Dec  7 05:02:05 np0005549474 systemd[1]: Started libvirt nodedev daemon.
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.937 255718 WARNING nova.virt.libvirt.driver [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.938 255718 DEBUG nova.compute.resource_tracker [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4950MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.939 255718 DEBUG oslo_concurrency.lockutils [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.939 255718 DEBUG oslo_concurrency.lockutils [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.956 255718 WARNING nova.compute.resource_tracker [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] No compute node record for compute-0.ctlplane.example.com:7e48a19e-1e29-4c67-8ffa-7daf855825bb: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 7e48a19e-1e29-4c67-8ffa-7daf855825bb could not be found.#033[00m
Dec  7 05:02:05 np0005549474 nova_compute[255714]: 2025-12-07 10:02:05.974 255718 INFO nova.compute.resource_tracker [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 7e48a19e-1e29-4c67-8ffa-7daf855825bb#033[00m
Dec  7 05:02:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:06.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:06 np0005549474 python3.9[256691]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 05:02:06 np0005549474 systemd[1]: Stopping nova_compute container...
Dec  7 05:02:06 np0005549474 nova_compute[255714]: 2025-12-07 10:02:06.401 255718 DEBUG oslo_concurrency.lockutils [None req-764b9742-3a91-464d-9b3f-ce9d1f6012c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:02:06 np0005549474 nova_compute[255714]: 2025-12-07 10:02:06.404 255718 DEBUG oslo_concurrency.lockutils [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:02:06 np0005549474 nova_compute[255714]: 2025-12-07 10:02:06.404 255718 DEBUG oslo_concurrency.lockutils [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:02:06 np0005549474 nova_compute[255714]: 2025-12-07 10:02:06.405 255718 DEBUG oslo_concurrency.lockutils [None req-90337248-9643-4e5e-8a57-7cc358da3a9d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:02:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:06.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:06 np0005549474 virtqemud[256299]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  7 05:02:06 np0005549474 systemd[1]: libpod-a0dffcb99129e8c7b9453d1b93f5b368c03204e63ed1d4ac146261014e9364d4.scope: Deactivated successfully.
Dec  7 05:02:06 np0005549474 virtqemud[256299]: hostname: compute-0
Dec  7 05:02:06 np0005549474 virtqemud[256299]: End of file while reading data: Input/output error
Dec  7 05:02:06 np0005549474 systemd[1]: libpod-a0dffcb99129e8c7b9453d1b93f5b368c03204e63ed1d4ac146261014e9364d4.scope: Consumed 3.748s CPU time.
Dec  7 05:02:06 np0005549474 podman[256695]: 2025-12-07 10:02:06.842406017 +0000 UTC m=+0.498627381 container died a0dffcb99129e8c7b9453d1b93f5b368c03204e63ed1d4ac146261014e9364d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute)
Dec  7 05:02:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 05:02:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0dffcb99129e8c7b9453d1b93f5b368c03204e63ed1d4ac146261014e9364d4-userdata-shm.mount: Deactivated successfully.
Dec  7 05:02:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay-c38537bc085c213dc18659b075778b5c7b73c80dcec329ff5cf5fcdf265cdc22-merged.mount: Deactivated successfully.
Dec  7 05:02:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:02:07.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:02:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100207 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:02:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:02:08 np0005549474 podman[256695]: 2025-12-07 10:02:08.115419659 +0000 UTC m=+1.771641043 container cleanup a0dffcb99129e8c7b9453d1b93f5b368c03204e63ed1d4ac146261014e9364d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:02:08 np0005549474 podman[256695]: nova_compute
Dec  7 05:02:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:08.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:08 np0005549474 podman[256726]: nova_compute
Dec  7 05:02:08 np0005549474 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  7 05:02:08 np0005549474 systemd[1]: Stopped nova_compute container.
Dec  7 05:02:08 np0005549474 systemd[1]: Starting nova_compute container...
Dec  7 05:02:08 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:02:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38537bc085c213dc18659b075778b5c7b73c80dcec329ff5cf5fcdf265cdc22/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38537bc085c213dc18659b075778b5c7b73c80dcec329ff5cf5fcdf265cdc22/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38537bc085c213dc18659b075778b5c7b73c80dcec329ff5cf5fcdf265cdc22/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38537bc085c213dc18659b075778b5c7b73c80dcec329ff5cf5fcdf265cdc22/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38537bc085c213dc18659b075778b5c7b73c80dcec329ff5cf5fcdf265cdc22/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:08 np0005549474 podman[256738]: 2025-12-07 10:02:08.508953228 +0000 UTC m=+0.287928423 container init a0dffcb99129e8c7b9453d1b93f5b368c03204e63ed1d4ac146261014e9364d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  7 05:02:08 np0005549474 podman[256738]: 2025-12-07 10:02:08.515699001 +0000 UTC m=+0.294674106 container start a0dffcb99129e8c7b9453d1b93f5b368c03204e63ed1d4ac146261014e9364d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=nova_compute)
Dec  7 05:02:08 np0005549474 nova_compute[256753]: + sudo -E kolla_set_configs
Dec  7 05:02:08 np0005549474 podman[256738]: nova_compute
Dec  7 05:02:08 np0005549474 systemd[1]: Started nova_compute container.
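The systemd transition above (Deactivated successfully → Stopped → Starting → Started) is the edpm_nova_compute.service unit recycling the podman-managed nova_compute container; the config_data label podman prints on the cleanup/init/start events carries the complete container definition (image digest, privileges, host network/PID namespaces, bind mounts). As a rough sketch only, assuming the dict shape shown in those events, that definition corresponds to a podman invocation along these lines (the translator below is hypothetical illustration, not EDPM code):

    # Hypothetical helper: build a `podman run` argv from the subset of
    # config_data keys visible in the log events above.
    def podman_args(cfg: dict) -> list:
        args = ["podman", "run", "--detach", "--name", "nova_compute"]
        if cfg.get("privileged"):
            args.append("--privileged")
        for flag in ("user", "net", "pid", "restart"):
            if flag in cfg:
                args += ["--" + flag, cfg[flag]]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", "%s=%s" % (key, val)]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        if cfg.get("command"):
            args.append(cfg["command"])
        return args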
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Validating config file
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying service configuration files
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Deleting /etc/ceph
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Creating directory /etc/ceph
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /etc/ceph
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Writing out command to execute
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  7 05:02:08 np0005549474 nova_compute[256753]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  7 05:02:08 np0005549474 nova_compute[256753]: ++ cat /run_command
Dec  7 05:02:08 np0005549474 nova_compute[256753]: + CMD=nova-compute
Dec  7 05:02:08 np0005549474 nova_compute[256753]: + ARGS=
Dec  7 05:02:08 np0005549474 nova_compute[256753]: + sudo kolla_copy_cacerts
Dec  7 05:02:08 np0005549474 nova_compute[256753]: + [[ ! -n '' ]]
Dec  7 05:02:08 np0005549474 nova_compute[256753]: + . kolla_extend_start
Dec  7 05:02:08 np0005549474 nova_compute[256753]: Running command: 'nova-compute'
Dec  7 05:02:08 np0005549474 nova_compute[256753]: + echo 'Running command: '\''nova-compute'\'''
Dec  7 05:02:08 np0005549474 nova_compute[256753]: + umask 0022
Dec  7 05:02:08 np0005549474 nova_compute[256753]: + exec nova-compute
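The INFO:__main__ lines above are kolla_set_configs at work: with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS it deletes each destination, re-copies it from /var/lib/kolla/config_files, and resets ownership and mode; kolla_start then reads /run_command ('nova-compute' here) and execs it. A minimal sketch of the config.json that drives this, with source/dest paths taken from the log and owner/perm values assumed for illustration:

    # Approximate shape of /var/lib/kolla/config_files/config.json implied by
    # the copy operations above; owner/perm are illustrative assumptions.
    KOLLA_CONFIG = {
        "command": "nova-compute",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf",
             "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/ceph/ceph.conf",
             "dest": "/etc/ceph/ceph.conf",
             "owner": "nova", "perm": "0600"},
        ],
    }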
Dec  7 05:02:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:08.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 1 op/s
Dec  7 05:02:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:02:09] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 05:02:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:02:09] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Dec  7 05:02:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:10.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:10 np0005549474 podman[256792]: 2025-12-07 10:02:10.249209257 +0000 UTC m=+0.068408754 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 05:02:10 np0005549474 podman[256793]: 2025-12-07 10:02:10.304991126 +0000 UTC m=+0.113742459 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.331848) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730331905, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 972, "num_deletes": 251, "total_data_size": 1725162, "memory_usage": 1742336, "flush_reason": "Manual Compaction"}
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730351948, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1690193, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19141, "largest_seqno": 20112, "table_properties": {"data_size": 1685365, "index_size": 2416, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10438, "raw_average_key_size": 19, "raw_value_size": 1675758, "raw_average_value_size": 3161, "num_data_blocks": 107, "num_entries": 530, "num_filter_entries": 530, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101642, "oldest_key_time": 1765101642, "file_creation_time": 1765101730, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 20146 microseconds, and 4153 cpu microseconds.
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.352000) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1690193 bytes OK
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.352022) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.353534) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.353552) EVENT_LOG_v1 {"time_micros": 1765101730353547, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.353570) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1720615, prev total WAL file size 1720615, number of live WAL files 2.
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.354423) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1650KB)], [41(13MB)]
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730354543, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15766733, "oldest_snapshot_seqno": -1}
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5103 keys, 13582660 bytes, temperature: kUnknown
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730512378, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 13582660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13547193, "index_size": 21597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 130122, "raw_average_key_size": 25, "raw_value_size": 13453331, "raw_average_value_size": 2636, "num_data_blocks": 886, "num_entries": 5103, "num_filter_entries": 5103, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765101730, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.512785) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13582660 bytes
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.514962) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 99.8 rd, 86.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 13.4 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(17.4) write-amplify(8.0) OK, records in: 5621, records dropped: 518 output_compression: NoCompression
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.514995) EVENT_LOG_v1 {"time_micros": 1765101730514980, "job": 20, "event": "compaction_finished", "compaction_time_micros": 157940, "compaction_time_cpu_micros": 39382, "output_level": 6, "num_output_files": 1, "total_output_size": 13582660, "num_input_records": 5621, "num_output_records": 5103, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730516098, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730521164, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.354191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.521406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.521415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.521417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.521419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:02:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:02:10.521422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
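The amplification figures in the JOB 20 compaction summary follow directly from the sizes logged above, consistent with RocksDB's definitions (level-0 input table #43 = 1690193 bytes, total input = 15766733 bytes, output table #44 = 13582660 bytes):

    write-amplify      = output / L0-input           = 13582660 / 1690193              ≈ 8.0
    read-write-amplify = (input + output) / L0-input = (15766733 + 13582660) / 1690193 ≈ 17.4

That is, merging a ~1.6 MB flush into the ~13 MB level-6 file costs roughly 17x the new data in combined read plus write traffic.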
Dec  7 05:02:10 np0005549474 nova_compute[256753]: 2025-12-07 10:02:10.583 256757 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  7 05:02:10 np0005549474 nova_compute[256753]: 2025-12-07 10:02:10.584 256757 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  7 05:02:10 np0005549474 nova_compute[256753]: 2025-12-07 10:02:10.584 256757 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  7 05:02:10 np0005549474 nova_compute[256753]: 2025-12-07 10:02:10.585 256757 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  7 05:02:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:10.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:10 np0005549474 nova_compute[256753]: 2025-12-07 10:02:10.721 256757 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:02:10 np0005549474 nova_compute[256753]: 2025-12-07 10:02:10.743 256757 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:02:10 np0005549474 nova_compute[256753]: 2025-12-07 10:02:10.744 256757 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
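The failed grep above is the iSCSI manual-scan capability probe: the storage layer checks whether the iscsiadm executable mentions the string node.session.scan and treats exit status 0 as "supported". Inside this container /usr/sbin/iscsiadm was just replaced by the run-on-host wrapper (see the kolla copy step at 05:02:08), so the string is absent, grep exits 1, and the probe is not retried. The logged check reproduced standalone:

    # Reproduces the logged command: exit 0 = string present, 1 = absent.
    import subprocess

    res = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    print("manual iSCSI scan supported:", res.returncode == 0)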
Dec  7 05:02:10 np0005549474 python3.9[256965]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  7 05:02:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 05:02:11 np0005549474 systemd[1]: Started libpod-conmon-0b257d4777a3521c81084a3b6b90f55bcb7c31d4119965af1bf7d453dcd383ae.scope.
Dec  7 05:02:11 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:02:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9521c92e2d7b1aadf5b5e810c555f1ec8cf2e48914ebe97343722aed901b3cf5/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9521c92e2d7b1aadf5b5e810c555f1ec8cf2e48914ebe97343722aed901b3cf5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9521c92e2d7b1aadf5b5e810c555f1ec8cf2e48914ebe97343722aed901b3cf5/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:11 np0005549474 podman[256991]: 2025-12-07 10:02:11.082416651 +0000 UTC m=+0.130467435 container init 0b257d4777a3521c81084a3b6b90f55bcb7c31d4119965af1bf7d453dcd383ae (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec  7 05:02:11 np0005549474 podman[256991]: 2025-12-07 10:02:11.090325605 +0000 UTC m=+0.138376369 container start 0b257d4777a3521c81084a3b6b90f55bcb7c31d4119965af1bf7d453dcd383ae (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:02:11 np0005549474 python3.9[256965]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Applying nova statedir ownership
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  7 05:02:11 np0005549474 nova_compute_init[257012]: INFO:nova_statedir:Nova statedir ownership complete
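nova_statedir_ownership.py, run once by the short-lived nova_compute_init container, walks /var/lib/nova, re-chowns anything not already owned by the nova uid/gid (42436:42436 here), applies the container SELinux context, and skips the paths listed in NOVA_STATEDIR_OWNERSHIP_SKIP (/var/lib/nova/compute_id above). A minimal sketch of just the ownership pass, assuming that behavior; the SELinux handling and the real script's structure are omitted:

    # Minimal sketch of the ownership pass logged above. The uid/gid and the
    # skip variable come from the log; colon-separated SKIP parsing is assumed.
    import os

    TARGET_UID = TARGET_GID = 42436
    SKIP = set(filter(None, os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":")))

    def fix_ownership(root="/var/lib/nova"):
        for dirpath, _dirs, files in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in files]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    os.lchown(path, TARGET_UID, TARGET_GID)  # "Changing ownership of ..."

    if __name__ == "__main__":
        fix_ownership()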
Dec  7 05:02:11 np0005549474 systemd[1]: libpod-0b257d4777a3521c81084a3b6b90f55bcb7c31d4119965af1bf7d453dcd383ae.scope: Deactivated successfully.
Dec  7 05:02:11 np0005549474 podman[257013]: 2025-12-07 10:02:11.150006312 +0000 UTC m=+0.030761279 container died 0b257d4777a3521c81084a3b6b90f55bcb7c31d4119965af1bf7d453dcd383ae (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 05:02:11 np0005549474 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b257d4777a3521c81084a3b6b90f55bcb7c31d4119965af1bf7d453dcd383ae-userdata-shm.mount: Deactivated successfully.
Dec  7 05:02:11 np0005549474 systemd[1]: var-lib-containers-storage-overlay-9521c92e2d7b1aadf5b5e810c555f1ec8cf2e48914ebe97343722aed901b3cf5-merged.mount: Deactivated successfully.
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.211 256757 INFO nova.virt.driver [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  7 05:02:11 np0005549474 podman[257024]: 2025-12-07 10:02:11.215536876 +0000 UTC m=+0.058518125 container cleanup 0b257d4777a3521c81084a3b6b90f55bcb7c31d4119965af1bf7d453dcd383ae (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  7 05:02:11 np0005549474 systemd[1]: libpod-conmon-0b257d4777a3521c81084a3b6b90f55bcb7c31d4119965af1bf7d453dcd383ae.scope: Deactivated successfully.
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.309 256757 INFO nova.compute.provider_config [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.320 256757 DEBUG oslo_concurrency.lockutils [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.321 256757 DEBUG oslo_concurrency.lockutils [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.321 256757 DEBUG oslo_concurrency.lockutils [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.321 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.321 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.321 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.322 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.322 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.322 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.322 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.322 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.322 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.323 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.323 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.323 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.323 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.323 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.323 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.323 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.324 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.324 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.324 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.324 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.324 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.324 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.324 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.325 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.325 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.325 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.325 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.325 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.325 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.325 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.326 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.326 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.326 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.326 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.326 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.326 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.327 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.327 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.327 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.327 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.327 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.327 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.328 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.328 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.328 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.328 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.328 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.329 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.329 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.329 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.329 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.329 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.329 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.329 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.330 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.330 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.330 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.330 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.330 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.330 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.331 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.331 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.331 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.331 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.331 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.331 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.331 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.331 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.332 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.332 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.332 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.332 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.332 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.332 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.332 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.333 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.333 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.333 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.333 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.333 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.333 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.334 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.334 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.334 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.334 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.334 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.334 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.334 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.335 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.335 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.335 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.335 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.335 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.335 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.336 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.336 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.336 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.336 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.336 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.336 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.337 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.337 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.337 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.337 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.337 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.337 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.337 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.338 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.338 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.338 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.338 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.338 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.338 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.338 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.339 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.339 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.339 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.339 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.339 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.339 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.339 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.340 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.340 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.340 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.340 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.340 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.340 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.340 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.341 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.341 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.341 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.341 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.341 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.341 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.341 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.342 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.342 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.342 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.342 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.342 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.342 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.342 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.342 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.343 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.343 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.343 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.343 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.343 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
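The DEFAULT-section dump above (the cfg.py:2602 lines) is produced by oslo.config: with debug logging enabled, nova-compute calls ConfigOpts.log_opt_values() at service start, which prints every registered option as a left-aligned "name = value" pair; options registered with secret=True, such as transport_url above, are masked as ****. A minimal, self-contained sketch of that mechanism, using hypothetical stand-in options rather than nova's real registrations:

    import logging

    from oslo_config import cfg

    # Stand-in options for illustration; nova registers its real ones the same way.
    OPTS = [
        cfg.BoolOpt('force_config_drive', default=False),
        cfg.StrOpt('instances_path', default='/var/lib/nova/instances'),
        cfg.StrOpt('transport_url', secret=True, default='rabbit://guest:guest@localhost'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(OPTS)

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('demo')

    CONF([], project='demo')                 # parse (empty) command line; config files optional
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits one "name = value" line per option;
                                             # secret options are printed as ****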
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.343 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.344 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.344 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.344 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.344 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.344 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.344 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.344 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.345 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.345 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.345 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.345 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.345 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.346 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.346 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.346 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.346 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.346 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.346 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.346 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.347 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.347 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.347 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.347 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.347 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.348 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.348 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.348 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.348 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.348 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.348 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.349 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.349 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.349 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.349 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.349 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.349 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.350 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.350 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.350 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.350 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.350 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.351 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.351 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.351 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.351 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.351 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.351 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.352 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.352 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.352 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.352 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.352 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.353 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.353 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.353 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.353 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.353 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.354 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.354 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.354 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.354 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
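The cache.* group above is oslo.cache's standard option set (dogpile.cache underneath), here using the in-process oslo_cache.dict backend with a 600 s default TTL. A sketch of roughly how a service turns those options into a usable cache region; the set_override calls are stand-ins for the values the service actually read from its config file:

    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.CONF
    cache.configure(CONF)        # registers the [cache] options listed above
    CONF([], project='demo')

    # Stand-ins mirroring cache.enabled = True and cache.backend = oslo_cache.dict.
    CONF.set_override('enabled', True, group='cache')
    CONF.set_override('backend', 'oslo_cache.dict', group='cache')
    CONF.set_override('expiration_time', 600, group='cache')

    region = cache.create_region()
    cache.configure_cache_region(CONF, region)  # applies backend, TTL, memcache_* options

    region.set('answer', 42)
    assert region.get('answer') == 42

With cache.enabled = False the region is wired to a no-op backend instead, which is why the flag appears in the dump even on hosts where no memcached servers are configured.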
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.354 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.355 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.355 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.355 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.355 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.355 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.356 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.356 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.356 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.356 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.356 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.356 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.357 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.357 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.357 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
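The cinder.* block above is a keystoneauth1-style client configuration: auth_type = password with the TLS file options unset, and catalog_info = volumev3:cinderv3:internalURL selecting the internal volume endpoint from the Keystone service catalog. Loading a session from such a group follows the usual keystoneauth1 pattern, sketched here; the empty CONF([]) call would pick up nova.conf if one is present:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    ks_loading.register_auth_conf_options(CONF, 'cinder')
    ks_loading.register_session_conf_options(CONF, 'cinder')
    CONF([], project='nova')

    # Honors cinder.auth_type / cinder.auth_section; returns None if no auth is configured.
    auth = ks_loading.load_auth_from_conf_options(CONF, 'cinder')
    # Honors cinder.cafile / certfile / keyfile / insecure / timeout.
    session = ks_loading.load_session_from_conf_options(CONF, 'cinder', auth=auth)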
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.357 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.357 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.357 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.358 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.358 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.358 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.358 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.358 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.358 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.358 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.359 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.359 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.359 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.359 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.359 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.359 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.359 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.360 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.360 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.360 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.360 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.360 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.361 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.361 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.361 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.361 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.361 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.362 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.362 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.362 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.362 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.362 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.362 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.362 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.363 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.363 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
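
The cyborg.* block above (like the glance.*, ironic.*, and keystone.* blocks below) is the standard keystoneauth1 session-plus-adapter option set that nova registers once per consumed service: the TLS/session options cafile, certfile, keyfile, insecure, timeout, and the adapter options service_type, service_name, region_name, valid_interfaces, endpoint_override, and min_version/max_version. A minimal sketch of how such a group is typically registered and consumed with keystoneauth1's loading helpers (illustrative, not nova's exact wiring):

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF

    # Registers the session options (cafile, certfile, timeout, ...) and
    # adapter options (service_type, valid_interfaces, ...) under
    # [cyborg], i.e. the option names seen in the dump above.
    ks_loading.register_session_conf_options(CONF, 'cyborg')
    ks_loading.register_adapter_conf_options(CONF, 'cyborg')

    def get_cyborg_adapter():
        # With the logged defaults this yields an adapter for
        # service_type 'accelerator' on the internal/public interfaces.
        session = ks_loading.load_session_from_conf_options(CONF, 'cyborg')
        return ks_loading.load_adapter_from_conf_options(
            CONF, 'cyborg', session=session)
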
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.363 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.363 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.363 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.363 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.364 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.364 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.364 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.364 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.364 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.365 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.365 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.365 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.365 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.365 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.366 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.366 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.366 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.366 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.366 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.366 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
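
The [database] pool settings translate directly onto SQLAlchemy engine parameters; nova reaches SQLAlchemy through oslo.db, but the mapping is easiest to see in plain SQLAlchemy terms. An illustrative translation of the logged values (the URL is a placeholder, since the real connection string is masked as ****):

    from sqlalchemy import create_engine

    # max_pool_size           -> pool_size      (5)
    # max_overflow            -> max_overflow   (50)
    # connection_recycle_time -> pool_recycle   (3600 s)
    engine = create_engine(
        'mysql+pymysql://user:pass@dbhost/nova',  # placeholder URL
        pool_size=5,
        max_overflow=50,
        pool_recycle=3600,
    )

The api_database.* block that follows is the same option set registered a second time for nova's separate API database.
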
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.367 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.367 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.367 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.367 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.367 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.368 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.368 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.368 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.368 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.368 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.368 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.369 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.369 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.369 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.369 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.369 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.369 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.370 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.370 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.370 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.370 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.370 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.370 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.371 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
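
One detail worth noting in the ephemeral_storage_encryption block: XTS mode consumes a double-length key, so cipher aes-xts-plain64 with key_size = 512 means two 256-bit AES subkeys, i.e. AES-256 strength rather than a hypothetical AES-512. A small illustration with the cryptography package (the key, tweak, and plaintext are made up; "plain64" means the tweak is the little-endian sector number):

    import os

    from cryptography.hazmat.primitives.ciphers import (
        Cipher, algorithms, modes)

    key = os.urandom(512 // 8)           # 64 bytes: two 256-bit halves
    tweak = (0).to_bytes(16, 'little')   # sector 0, little-endian
    cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))
    enc = cipher.encryptor()
    ct = enc.update(b'sixteen byte blk') + enc.finalize()
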
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.371 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.371 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.371 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.371 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.371 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.372 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.372 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.372 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.372 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.372 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.372 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.372 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.373 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.373 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.373 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.373 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.373 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.373 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.373 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.374 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.374 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.374 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.374 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.374 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.374 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.374 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.375 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.375 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.375 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.375 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.375 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.375 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.375 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.376 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.376 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.376 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.376 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.376 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.376 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.376 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.377 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.377 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.377 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.377 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.377 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.377 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.377 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.378 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.378 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
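
The hyperv.* defaults are registered unconditionally even when the Hyper-V driver is not in use, which is why a Linux compute node logs qemu_img_cmd = qemu-img.exe. The one option with non-obvious semantics is dynamic_memory_ratio: the guest's startup memory is its assigned memory divided by the ratio, and the logged 1.0 disables dynamic memory entirely. A hypothetical helper to make the arithmetic concrete:

    def hyperv_startup_memory_mb(flavor_memory_mb, dynamic_memory_ratio=1.0):
        # ratio 1.0 (the logged default): startup memory == assigned
        # memory, i.e. no ballooning; ratio 2.0 would start the guest
        # with half its memory and let Hyper-V expand it on demand.
        return int(flavor_memory_mb / dynamic_memory_ratio)

    assert hyperv_startup_memory_mb(4096) == 4096
    assert hyperv_startup_memory_mb(4096, dynamic_memory_ratio=2.0) == 2048
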
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.378 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.378 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.379 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.379 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.379 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.379 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.379 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.379 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
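
The image_cache numbers fit together as follows: the cache manager runs every manager_interval = 2400 s (40 minutes), and with remove_unused_base_images = True it drops base images unused for remove_unused_original_minimum_age_seconds = 86400 s (24 h) and resized variants unused for 3600 s (1 h). A hypothetical sketch of that age test (last_used would come from the cached file's timestamps):

    import time

    ORIGINAL_MAX_AGE = 86400  # logged remove_unused_original_minimum_age_seconds
    RESIZED_MAX_AGE = 3600    # logged remove_unused_resized_minimum_age_seconds

    def cache_entry_expired(last_used, resized=False, now=None):
        now = time.time() if now is None else now
        max_age = RESIZED_MAX_AGE if resized else ORIGINAL_MAX_AGE
        return (now - last_used) > max_age
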
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.380 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.380 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.380 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.380 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.380 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.380 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.380 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.381 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.381 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.381 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.381 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.381 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.381 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.381 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.382 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.382 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.382 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.382 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.382 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.382 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.382 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.383 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.383 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.383 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.383 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.383 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
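
Of the ironic.* values, the retry pair is the operationally interesting one: api_max_retries = 60 with api_retry_interval = 2 means a failing ironic API call is retried for roughly two minutes before nova gives up. A generic sketch of that policy (illustrative, not nova's actual retry code):

    import time

    def call_with_retries(fn, api_max_retries=60, api_retry_interval=2):
        # Logged defaults: up to 60 retries, 2 s apart, so about
        # 120 s of retrying in the worst case.
        for attempt in range(api_max_retries + 1):
            try:
                return fn()
            except Exception:
                if attempt == api_max_retries:
                    raise
                time.sleep(api_retry_interval)
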
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.383 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.383 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.384 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.384 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.384 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.384 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.384 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.385 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.385 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.385 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.385 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.385 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.385 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.386 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.386 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.386 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.386 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.386 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.387 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
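
key_manager.backend = barbican selects the Barbican driver in castellan, the key-manager abstraction nova uses, and the barbican.* block configures that driver: auth_endpoint for Keystone, endpoint type internal, and up to number_of_retries = 60 attempts at retry_delay = 1 s. A hedged sketch of how a castellan key manager is obtained from such a configuration (assumes CONF has already been parsed with these groups registered):

    from castellan import key_manager
    from oslo_config import cfg

    CONF = cfg.CONF

    # castellan reads [key_manager] backend ('barbican' in this dump)
    # and hands back the matching driver, configured from [barbican].
    km = key_manager.API(configuration=CONF)
    # km.get(context, managed_object_id) would then fetch a key
    # through Barbican.
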
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.387 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.387 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.387 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.387 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.387 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.387 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.388 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.388 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.388 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.388 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.388 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.388 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.388 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.389 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.389 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.389 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.389 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.389 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.389 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.389 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.390 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.390 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.390 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.390 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.390 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
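
The vault.* defaults describe castellan's alternative Vault backend, unused here given key_manager.backend = barbican: KV secrets engine v2 mounted at 'secret', talking to a dev-style server at http://127.0.0.1:8200 with TLS disabled. Purely for illustration, the same read expressed with the hvac client (the secret path is hypothetical, and this is not castellan's implementation, just the equivalent request in a common Vault client):

    import hvac

    # Logged defaults: vault_url http://127.0.0.1:8200,
    # kv_mountpoint 'secret', kv_version 2.
    client = hvac.Client(url='http://127.0.0.1:8200')
    secret = client.secrets.kv.v2.read_secret_version(
        path='some-key-id',        # hypothetical secret path
        mount_point='secret',
    )
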
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.390 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.390 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.391 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.391 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.391 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.391 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.391 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.391 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.391 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.391 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.392 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.392 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.392 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.392 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.392 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.392 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.393 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.393 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.393 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.393 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.393 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.393 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.394 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.394 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.394 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.394 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.394 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.394 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.394 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.395 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.395 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.395 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.395 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.395 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.395 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.395 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.396 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.396 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.396 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.396 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.396 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.396 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.397 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.397 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.397 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.397 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.397 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.397 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.398 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.398 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.398 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.398 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.398 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.398 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.398 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.399 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.399 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.399 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.399 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.399 256757 WARNING oslo_config.cfg [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  7 05:02:11 np0005549474 nova_compute[256753]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  7 05:02:11 np0005549474 nova_compute[256753]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  7 05:02:11 np0005549474 nova_compute[256753]: and ``live_migration_inbound_addr`` respectively.
Dec  7 05:02:11 np0005549474 nova_compute[256753]: ).  Its value may be silently ignored in the future.
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.399 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
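The WARNING above flags live_migration_uri as deprecated in favor of live_migration_scheme and live_migration_inbound_addr. Given the value recorded here, qemu+tls://%s/system, an equivalent nova.conf fragment would look roughly like the sketch below; the inbound-addr line is only needed if migration traffic must target an address other than the peer's configured hostname (an assumption based on the option's stated purpose):

    [libvirt]
    # Deprecated form, as currently configured:
    # live_migration_uri = qemu+tls://%s/system
    # Replacement options named by the warning:
    live_migration_scheme = tls
    # live_migration_inbound_addr = <target address, only if it must differ>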
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.400 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.400 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.400 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.400 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.400 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.400 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.401 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.401 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.401 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.401 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.401 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.401 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.401 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.402 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.402 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.402 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.402 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.403 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.rbd_secret_uuid        = 75f4c9fd-539a-5e17-b55a-0a12a4e2736c log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.403 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.403 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.403 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.403 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.403 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.403 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.404 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.404 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.404 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.404 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.404 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.404 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.405 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.405 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.405 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.405 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.405 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.405 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.405 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.406 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.406 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.406 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.406 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.406 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.406 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.407 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.407 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.407 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.407 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.407 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.407 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.408 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.408 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
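Taken together, the libvirt.* values above describe a Ceph-backed KVM host: ephemeral disks live on the vms RBD pool accessed as the openstack user, guests default to the q35 machine type, and live migration runs over native TLS. Reassembled into the [libvirt] section of nova.conf, the non-default storage and virtualization settings would read approximately as follows (values copied from the dump; an illustrative reconstruction, not the actual file):

    [libvirt]
    virt_type = kvm
    cpu_mode = host-model
    hw_machine_type = x86_64=q35
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
    live_migration_with_native_tls = True
    volume_use_multipath = True
    swtpm_enabled = True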
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.408 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.408 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.408 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.408 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.408 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.408 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.409 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.409 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.409 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.409 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.409 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.409 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.409 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.410 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.410 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.410 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.410 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.410 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.410 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.410 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.411 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.411 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.411 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.411 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.411 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.411 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.411 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.412 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
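Note that neutron.metadata_proxy_shared_secret prints as **** above (as does placement.password further down): oslo.config masks any option registered with secret=True when dumping values, so credentials never reach the log. An illustrative declaration follows; the real option definition lives in nova's own config modules, so the default shown here is an assumption:

    from oslo_config import cfg

    opts = [
        cfg.StrOpt('metadata_proxy_shared_secret',
                   secret=True,   # masked as **** by log_opt_values
                   default=''),   # default here is illustrative only
    ]
    cfg.CONF.register_opts(opts, group='neutron')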
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.412 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.412 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.412 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.412 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.412 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.412 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.413 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.413 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.413 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.413 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.413 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.413 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.413 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.413 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.414 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.414 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.414 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.414 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.414 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.414 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.415 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.415 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.415 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.415 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.415 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.415 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.415 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.416 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.416 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.416 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.416 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.416 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.416 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.416 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.417 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.417 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.417 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.417 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.417 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.417 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.417 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.418 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.418 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.418 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.418 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.418 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.418 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.418 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.419 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.419 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.419 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.419 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.419 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.419 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.419 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.420 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.420 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.420 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.420 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.420 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.420 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.420 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.421 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.421 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.421 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.421 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.421 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.421 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.421 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.422 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.422 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.422 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.422 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.422 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.422 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.422 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.423 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.423 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.423 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.423 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.423 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.423 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.423 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.424 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.424 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.424 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.424 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.424 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.424 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.424 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.425 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.425 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.425 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.425 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.425 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.425 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.426 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.426 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.426 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.426 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.426 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.426 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.427 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.427 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.427 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.427 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.427 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.427 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.427 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.428 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.428 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.428 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.428 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.428 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.428 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.429 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.429 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.429 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.429 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.429 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.429 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.430 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.430 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.430 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.430 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.430 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.430 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.430 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.431 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.431 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.431 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.431 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.431 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.431 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.431 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.432 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.432 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.432 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.432 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.432 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.432 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.433 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.433 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.433 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.433 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.433 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.433 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.433 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.434 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.434 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.434 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.434 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.434 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.434 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.435 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.435 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.435 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.435 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.435 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.436 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.436 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.436 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.436 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.436 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.437 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.437 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.437 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.437 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.437 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.438 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.438 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.438 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.438 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.439 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.439 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.439 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.439 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.439 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.440 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.440 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.440 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.440 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.440 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.440 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.441 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.441 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.441 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.441 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.441 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.441 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.442 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.442 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.442 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.442 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.442 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.443 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.443 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.443 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.443 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.443 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.444 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.444 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.444 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.444 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.444 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.444 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.445 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.445 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.445 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.445 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.445 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.446 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.446 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.446 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.446 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.447 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.447 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.447 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.447 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.447 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.448 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.448 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.448 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.448 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.448 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.449 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.449 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.449 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.449 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.449 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.450 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.450 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.450 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.450 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.450 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.450 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.451 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.451 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.451 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.451 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.451 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.452 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.452 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.452 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.452 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.452 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.453 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.453 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.453 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.453 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.453 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.454 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.454 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.454 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.454 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.454 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.455 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.455 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.455 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.455 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.455 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.456 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.456 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.456 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.456 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.456 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.457 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.457 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.457 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.457 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.457 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.457 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.458 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.458 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.458 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.458 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.458 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.459 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.459 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.459 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.459 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.459 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.459 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.460 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.460 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.460 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.460 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.460 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.461 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.461 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.461 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.461 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.461 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.462 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.462 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.462 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.462 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.462 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.462 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.463 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.463 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.463 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.463 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.463 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.464 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.464 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.464 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.464 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.464 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.465 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.465 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.465 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.465 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.465 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.465 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.466 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.466 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.466 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.466 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.466 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.467 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.467 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.467 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.467 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.467 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.468 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.468 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.468 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.468 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.468 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.469 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.469 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.469 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.469 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.469 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.469 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.470 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.470 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.470 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.470 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.470 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.471 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.471 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.471 256757 DEBUG oslo_service.service [None req-f8e7f670-2f63-4604-bba6-e040a45e591a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
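The dump above is oslo.config's standard startup listing: with debug logging enabled, the service walks every registered option group and prints "group.option = value" pairs, masking secrets as ****, via ConfigOpts.log_opt_values() (the cfg.py:2609 frame cited on each line; the closing asterisk banner comes from cfg.py:2613). A minimal sketch of the same mechanism outside nova, assuming only the oslo.config package is installed; the single "ssl" option registered here is an illustrative stand-in for nova's full option set:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    # Register one option in a group that also appears in the dump above.
    conf.register_opts([cfg.BoolOpt('ssl', default=False)],
                       group='oslo_messaging_rabbit')
    conf([])  # parse an empty argv so the ConfigOpts object becomes usable
    # Emits "oslo_messaging_rabbit.ssl = False" framed by asterisk banners,
    # the same shape as the dump above.
    conf.log_opt_values(LOG, logging.DEBUG)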
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.472 256757 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.489 256757 INFO nova.virt.node [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Determined node identity 7e48a19e-1e29-4c67-8ffa-7daf855825bb from /var/lib/nova/compute_id
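As the line above shows, nova.virt.node derives the node identity from /var/lib/nova/compute_id rather than from the hostname, so the UUID stays stable across restarts. A hypothetical spot check of that file, assuming read access on the compute host:

    from pathlib import Path
    import uuid

    # Same file the log line above cites; it holds one bare UUID string.
    raw = Path('/var/lib/nova/compute_id').read_text().strip()
    print(uuid.UUID(raw))  # raises ValueError if the file is corrupt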
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.490 256757 DEBUG nova.virt.libvirt.host [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.491 256757 DEBUG nova.virt.libvirt.host [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.491 256757 DEBUG nova.virt.libvirt.host [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.491 256757 DEBUG nova.virt.libvirt.host [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.503 256757 DEBUG nova.virt.libvirt.host [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd64c2c44f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.505 256757 DEBUG nova.virt.libvirt.host [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd64c2c44f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.506 256757 INFO nova.virt.libvirt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Connection event '1' reason 'None'
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.514 256757 INFO nova.virt.libvirt.host [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Libvirt host capabilities <capabilities>
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <host>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <uuid>21fd7ebb-512a-4f77-9836-e8bca79e9734</uuid>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <cpu>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <arch>x86_64</arch>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model>EPYC-Rome-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <vendor>AMD</vendor>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <microcode version='16777317'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <signature family='23' model='49' stepping='0'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='x2apic'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='tsc-deadline'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='osxsave'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='hypervisor'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='tsc_adjust'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='spec-ctrl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='stibp'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='arch-capabilities'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='ssbd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='cmp_legacy'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='topoext'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='virt-ssbd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='lbrv'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='tsc-scale'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='vmcb-clean'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='pause-filter'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='pfthreshold'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='svme-addr-chk'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='rdctl-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='skip-l1dfl-vmentry'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='mds-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature name='pschange-mc-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <pages unit='KiB' size='4'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <pages unit='KiB' size='2048'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <pages unit='KiB' size='1048576'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </cpu>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <power_management>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <suspend_mem/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </power_management>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <iommu support='no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <migration_features>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <live/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <uri_transports>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <uri_transport>tcp</uri_transport>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <uri_transport>rdma</uri_transport>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </uri_transports>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </migration_features>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <topology>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <cells num='1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <cell id='0'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:          <memory unit='KiB'>7864320</memory>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:          <pages unit='KiB' size='2048'>0</pages>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:          <distances>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:            <sibling id='0' value='10'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:          </distances>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:          <cpus num='8'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:          </cpus>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        </cell>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </cells>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </topology>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <cache>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </cache>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <secmodel>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model>selinux</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <doi>0</doi>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </secmodel>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <secmodel>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model>dac</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <doi>0</doi>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </secmodel>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  </host>
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <guest>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <os_type>hvm</os_type>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <arch name='i686'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <wordsize>32</wordsize>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <domain type='qemu'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <domain type='kvm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </arch>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <features>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <pae/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <nonpae/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <acpi default='on' toggle='yes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <apic default='on' toggle='no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <cpuselection/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <deviceboot/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <disksnapshot default='on' toggle='no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <externalSnapshot/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </features>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  </guest>
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <guest>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <os_type>hvm</os_type>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <arch name='x86_64'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <wordsize>64</wordsize>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <domain type='qemu'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <domain type='kvm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </arch>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <features>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <acpi default='on' toggle='yes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <apic default='on' toggle='no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <cpuselection/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <deviceboot/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <disksnapshot default='on' toggle='no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <externalSnapshot/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </features>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  </guest>
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 
Dec  7 05:02:11 np0005549474 nova_compute[256753]: </capabilities>
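Everything from <capabilities> through </capabilities> above is a single log message: nova fetches the host capabilities XML once over the qemu:///system connection and logs it verbatim. A minimal sketch of pulling the same document with the libvirt Python bindings, assuming libvirt-python is installed and the caller can reach the local libvirtd:

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')     # same URI nova connected to
    root = ET.fromstring(conn.getCapabilities())
    # A few of the fields visible in the XML above:
    print(root.findtext('./host/cpu/arch'))   # x86_64
    print(root.findtext('./host/cpu/model'))  # EPYC-Rome-v4 on this host
    print([a.get('name') for a in root.findall('./guest/arch')])  # ['i686', 'x86_64']
    conn.close()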
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.519 256757 DEBUG nova.virt.libvirt.host [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
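Each per-architecture dump that follows is the result of one libvirt getDomainCapabilities() query, keyed by emulator binary, guest arch, machine type, and virt type. A hypothetical equivalent of the i686/q35 query logged below, under the same assumptions as the previous sketch:

    import libvirt

    conn = libvirt.open('qemu:///system')
    # Mirrors the query nova logs next: qemu-kvm emulator, arch i686,
    # machine type q35, KVM domain type.
    print(conn.getDomainCapabilities('/usr/libexec/qemu-kvm', 'i686',
                                     'q35', 'kvm'))
    conn.close()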
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.523 256757 DEBUG nova.virt.libvirt.host [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  7 05:02:11 np0005549474 nova_compute[256753]: <domainCapabilities>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <path>/usr/libexec/qemu-kvm</path>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <domain>kvm</domain>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <arch>i686</arch>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <vcpu max='4096'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <iothreads supported='yes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <os supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <enum name='firmware'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <loader supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='type'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>rom</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>pflash</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='readonly'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>yes</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>no</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='secure'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>no</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </loader>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <cpu>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <mode name='host-passthrough' supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='hostPassthroughMigratable'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>on</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>off</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </mode>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <mode name='maximum' supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='maximumMigratable'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>on</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>off</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </mode>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <mode name='host-model' supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <vendor>AMD</vendor>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='x2apic'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='tsc-deadline'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='hypervisor'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='tsc_adjust'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='spec-ctrl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='stibp'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='ssbd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='cmp_legacy'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='overflow-recov'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='succor'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='ibrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='amd-ssbd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='virt-ssbd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='lbrv'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='tsc-scale'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='vmcb-clean'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='flushbyasid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='pause-filter'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='pfthreshold'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='svme-addr-chk'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='disable' name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </mode>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <mode name='custom' supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-IBRS'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-noTSX'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-v4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-v4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-v5'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cooperlake'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cooperlake-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cooperlake-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Denverton'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='mpx'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Denverton-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='mpx'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Denverton-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Denverton-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Dhyana-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Genoa'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amd-psfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='auto-ibrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='stibp-always-on'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Genoa-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amd-psfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='auto-ibrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='stibp-always-on'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Milan'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Milan-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Milan-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amd-psfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='stibp-always-on'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Rome'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Rome-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Rome-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Rome-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-v4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='GraniteRapids'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-int8'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-tile'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fbsdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrc'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fzrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='mcdt-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pbrsb-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='prefetchiti'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='psdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='serialize'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='GraniteRapids-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-int8'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-tile'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fbsdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrc'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fzrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='mcdt-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pbrsb-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='prefetchiti'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='psdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='serialize'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='GraniteRapids-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-int8'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-tile'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx10'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx10-128'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx10-256'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx10-512'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='cldemote'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fbsdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrc'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fzrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='mcdt-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdir64b'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdiri'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pbrsb-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='prefetchiti'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='psdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='serialize'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ss'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Haswell'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Haswell-IBRS'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Haswell-noTSX'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Haswell-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Haswell-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Haswell-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Haswell-v4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Icelake-Server'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Icelake-Server-noTSX'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Icelake-Server-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Icelake-Server-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Icelake-Server-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Icelake-Server-v4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Icelake-Server-v5'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Icelake-Server-v6'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Icelake-Server-v7'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='IvyBridge'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='IvyBridge-IBRS'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='IvyBridge-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='IvyBridge-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='KnightsMill'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-4fmaps'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-4vnniw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512er'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512pf'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ss'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='KnightsMill-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-4fmaps'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-4vnniw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512er'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512pf'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ss'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Opteron_G4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fma4'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xop'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Opteron_G4-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fma4'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xop'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Opteron_G5'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fma4'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='tbm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xop'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Opteron_G5-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fma4'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='tbm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xop'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='SapphireRapids'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-int8'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-tile'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrc'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fzrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='serialize'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='SapphireRapids-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-int8'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-tile'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrc'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fzrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='serialize'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='SapphireRapids-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-int8'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-tile'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fbsdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrc'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fzrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='psdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='serialize'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='SapphireRapids-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-int8'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-tile'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='cldemote'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fbsdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrc'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fzrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdir64b'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdiri'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='psdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='serialize'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ss'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='tsx-ldtrk'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='SierraForest'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-ne-convert'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni-int8'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='cmpccxadd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fbsdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='mcdt-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pbrsb-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='psdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='serialize'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='SierraForest-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-ne-convert'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni-int8'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='cmpccxadd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fbsdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='mcdt-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pbrsb-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='psdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='sbdr-ssdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='serialize'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Client'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Client-IBRS'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Client-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Client-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Client-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Client-v4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Server'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Server-IBRS'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Server-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Server-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Server-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Server-v4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Skylake-Server-v5'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Snowridge'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='cldemote'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='core-capability'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdir64b'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdiri'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='mpx'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='split-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Snowridge-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='cldemote'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='core-capability'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdir64b'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdiri'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='mpx'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='split-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Snowridge-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='cldemote'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='core-capability'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdir64b'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdiri'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='split-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Snowridge-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='cldemote'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='core-capability'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdir64b'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdiri'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='split-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Snowridge-v4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='cldemote'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdir64b'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='movdiri'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='athlon'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='3dnow'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='3dnowext'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='athlon-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='3dnow'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='3dnowext'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='core2duo'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ss'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='core2duo-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ss'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='coreduo'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ss'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='coreduo-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ss'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='n270'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ss'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='n270-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ss'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='phenom'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='3dnow'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='3dnowext'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='phenom-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='3dnow'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='3dnowext'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </mode>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  </cpu>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <memoryBacking supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <enum name='sourceType'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <value>file</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <value>anonymous</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <value>memfd</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  </memoryBacking>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <devices>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <disk supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='diskDevice'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>disk</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>cdrom</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>floppy</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>lun</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='bus'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>fdc</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>scsi</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>virtio</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>usb</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>sata</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='model'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>virtio</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>virtio-transitional</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>virtio-non-transitional</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <graphics supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='type'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>vnc</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>egl-headless</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>dbus</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </graphics>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <video supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='modelType'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>vga</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>cirrus</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>virtio</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>none</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>bochs</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>ramfb</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </video>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <hostdev supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='mode'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>subsystem</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='startupPolicy'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>default</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>mandatory</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>requisite</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>optional</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='subsysType'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>usb</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>pci</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>scsi</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='capsType'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='pciBackend'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </hostdev>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <rng supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='model'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>virtio</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>virtio-transitional</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>virtio-non-transitional</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='backendModel'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>random</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>egd</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>builtin</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </rng>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <filesystem supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='driverType'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>path</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>handle</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>virtiofs</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </filesystem>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <tpm supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='model'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>tpm-tis</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>tpm-crb</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='backendModel'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>emulator</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>external</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='backendVersion'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>2.0</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </tpm>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <redirdev supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='bus'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>usb</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </redirdev>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <channel supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='type'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>pty</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>unix</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </channel>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <crypto supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='model'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='type'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>qemu</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='backendModel'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>builtin</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </crypto>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <interface supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='backendType'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>default</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>passt</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <panic supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='model'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>isa</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>hyperv</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </panic>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <console supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='type'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>null</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>vc</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>pty</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>dev</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>file</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>pipe</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>stdio</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>udp</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>tcp</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>unix</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>qemu-vdagent</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>dbus</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </console>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  </devices>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <features>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <gic supported='no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <vmcoreinfo supported='yes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <genid supported='yes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <backingStoreInput supported='yes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <backup supported='yes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <async-teardown supported='yes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <ps2 supported='yes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <sev supported='no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <sgx supported='no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <hyperv supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='features'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>relaxed</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>vapic</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>spinlocks</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>vpindex</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>runtime</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>synic</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>stimer</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>reset</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>vendor_id</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>frequencies</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>reenlightenment</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>tlbflush</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>ipi</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>avic</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>emsr_bitmap</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>xmm_input</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <defaults>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <spinlocks>4095</spinlocks>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <stimer_direct>on</stimer_direct>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <tlbflush_direct>on</tlbflush_direct>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <tlbflush_extended>on</tlbflush_extended>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </defaults>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </hyperv>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <launchSecurity supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='sectype'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>tdx</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </launchSecurity>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  </features>
Dec  7 05:02:11 np0005549474 nova_compute[256753]: </domainCapabilities>
Dec  7 05:02:11 np0005549474 nova_compute[256753]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
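The domainCapabilities document that nova logs above is fetched straight from libvirt by host.py. A minimal sketch of retrieving the same XML outside of nova, assuming the libvirt-python bindings and a reachable qemu:///system socket; the arch and machine values here are illustrative placeholders, not taken from this dump (the emulator path does match the <path> element logged below):

import xml.etree.ElementTree as ET

import libvirt

# Connect to the local libvirt daemon (assumption: qemu:///system is reachable).
conn = libvirt.open("qemu:///system")

# virConnectGetDomainCapabilities: emulator binary, arch, machine type, virt type, flags.
caps_xml = conn.getDomainCapabilities("/usr/libexec/qemu-kvm", "x86_64", "pc", "kvm", 0)
conn.close()

# Walk <cpu><mode name='custom'> and report which named models are usable; for
# blocked models, list the <blockers> features exactly as they appear in the dump.
root = ET.fromstring(caps_xml)
custom = root.find("./cpu/mode[@name='custom']")
blockers = {
    b.get("model"): [f.get("name") for f in b.findall("feature")]
    for b in custom.findall("blockers")
}
for model in custom.findall("model"):
    if model.get("usable") == "yes":
        print(f"{model.text}: usable")
    else:
        print(f"{model.text}: blocked by {', '.join(blockers.get(model.text, []))}")

Run against this host, the EPYC-Rome entries would report xsaves as the lone blocking feature, matching the <blockers model='EPYC-Rome'> elements in the i686 dump that follows.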
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.526 256757 DEBUG nova.virt.libvirt.volume.mount [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec  7 05:02:11 np0005549474 nova_compute[256753]: 2025-12-07 10:02:11.529 256757 DEBUG nova.virt.libvirt.host [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  7 05:02:11 np0005549474 nova_compute[256753]: <domainCapabilities>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <path>/usr/libexec/qemu-kvm</path>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <domain>kvm</domain>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <arch>i686</arch>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <vcpu max='240'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <iothreads supported='yes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <os supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <enum name='firmware'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <loader supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='type'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>rom</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>pflash</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='readonly'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>yes</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>no</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='secure'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>no</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </loader>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:  <cpu>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <mode name='host-passthrough' supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='hostPassthroughMigratable'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>on</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>off</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </mode>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <mode name='maximum' supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <enum name='maximumMigratable'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>on</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <value>off</value>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </enum>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </mode>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <mode name='host-model' supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <vendor>AMD</vendor>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='x2apic'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='tsc-deadline'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='hypervisor'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='tsc_adjust'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='spec-ctrl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='stibp'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='ssbd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='cmp_legacy'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='overflow-recov'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='succor'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='ibrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='amd-ssbd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='virt-ssbd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='lbrv'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='tsc-scale'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='vmcb-clean'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='flushbyasid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='pause-filter'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='pfthreshold'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='svme-addr-chk'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <feature policy='disable' name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    </mode>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:    <mode name='custom' supported='yes'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-IBRS'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-noTSX'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Broadwell-v4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-v4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cascadelake-Server-v5'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cooperlake'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cooperlake-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Cooperlake-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='rtm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='taa-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Denverton'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='mpx'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Denverton-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='mpx'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Denverton-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Denverton-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='Dhyana-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Genoa'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amd-psfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='auto-ibrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='stibp-always-on'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Genoa-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amd-psfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='auto-ibrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='la57'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='stibp-always-on'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Milan'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Milan-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Milan-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amd-psfd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='invpcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='no-nested-data-bp'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='null-sel-clr-base'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pcid'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='pku'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='stibp-always-on'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vaes'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='vpclmulqdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Rome'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Rome-v1'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Rome-v2'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-Rome-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-v3'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='EPYC-v4'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='xsaves'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      </blockers>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:      <blockers model='GraniteRapids'>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-int8'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='amx-tile'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx-vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-bf16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-fp16'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512-vpopcntdq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bitalg'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512bw'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512cd'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512dq'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512f'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512ifma'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vbmi2'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vl'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='avx512vnni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='bus-lock-detect'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='erms'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fbsdp-no'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrc'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fsrs'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='fzrm'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='gfni'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='hle'/>
Dec  7 05:02:11 np0005549474 nova_compute[256753]:        <feature name='ibrs-all'/>
Dec  7 05:02:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:39 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:39 np0005549474 rsyslogd[1010]: imjournal: 4681 messages lost due to rate-limiting (20000 allowed within 600 seconds)
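The imjournal loss reported above is also why the i686 capabilities dump is cut off mid-element: rsyslog discarded 4681 journal messages once the 20000-messages-per-600-seconds budget was exhausted. A hedged sketch of widening that budget in /etc/rsyslog.conf, using the imjournal module's documented Ratelimit parameters (the burst value is an illustrative choice, not a recommendation):

module(load="imjournal"
       # messages allowed per interval; the stock default is 20000
       Ratelimit.Burst="100000"
       # interval in seconds; 0 disables imjournal rate limiting entirely
       Ratelimit.Interval="600")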
Dec  7 05:02:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:39 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:02:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:39 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:02:39] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  7 05:02:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:02:39] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  7 05:02:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:40.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:40 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:40.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:40 np0005549474 podman[257356]: 2025-12-07 10:02:40.861625057 +0000 UTC m=+0.062976537 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec  7 05:02:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  7 05:02:40 np0005549474 podman[257357]: 2025-12-07 10:02:40.914173297 +0000 UTC m=+0.107373145 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  7 05:02:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:41 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  7 05:02:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/246843581' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  7 05:02:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  7 05:02:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/246843581' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  7 05:02:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:41 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100242 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:02:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:42.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:42 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:02:42
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.nfs', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', '.rgw.root', 'vms']
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:02:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:02:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:02:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:02:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:42.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:02:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:02:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:43 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:43 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:44.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:44 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:44.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:02:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:45 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:45 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:46.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:46 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:46.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100246 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:02:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec  7 05:02:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:02:47.078Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:02:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:02:47.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:02:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:47 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:02:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:47 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:48.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:48 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:48.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec  7 05:02:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:49 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:49 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:02:49] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:02:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:02:49] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:02:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:50.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:50 np0005549474 podman[257414]: 2025-12-07 10:02:50.272739593 +0000 UTC m=+0.081030408 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  7 05:02:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:50 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:50.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Dec  7 05:02:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:51 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:51 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:52.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:52 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:02:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:52.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec  7 05:02:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:53 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:53 np0005549474 nova_compute[256753]: 2025-12-07 10:02:53.592 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:02:53 np0005549474 nova_compute[256753]: 2025-12-07 10:02:53.631 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:02:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:53 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:54.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:54 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:54.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:02:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:55 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:02:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:55 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:55 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:56 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:02:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:56.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:02:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:02:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:02:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:02:57.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:02:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:57 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:57 np0005549474 podman[257612]: 2025-12-07 10:02:57.271033334 +0000 UTC m=+0.050145227 container create 6117fe438adb8dc331f9a1371c6535dbad557711ab20d3849199957c049e5032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_leavitt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 05:02:57 np0005549474 systemd[1]: Started libpod-conmon-6117fe438adb8dc331f9a1371c6535dbad557711ab20d3849199957c049e5032.scope.
Dec  7 05:02:57 np0005549474 podman[257612]: 2025-12-07 10:02:57.246919978 +0000 UTC m=+0.026031941 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:02:57 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:02:57 np0005549474 podman[257612]: 2025-12-07 10:02:57.359761181 +0000 UTC m=+0.138873084 container init 6117fe438adb8dc331f9a1371c6535dbad557711ab20d3849199957c049e5032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_leavitt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:02:57 np0005549474 podman[257612]: 2025-12-07 10:02:57.370479733 +0000 UTC m=+0.149591626 container start 6117fe438adb8dc331f9a1371c6535dbad557711ab20d3849199957c049e5032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 05:02:57 np0005549474 podman[257612]: 2025-12-07 10:02:57.373970708 +0000 UTC m=+0.153082621 container attach 6117fe438adb8dc331f9a1371c6535dbad557711ab20d3849199957c049e5032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_leavitt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Dec  7 05:02:57 np0005549474 optimistic_leavitt[257629]: 167 167
Dec  7 05:02:57 np0005549474 systemd[1]: libpod-6117fe438adb8dc331f9a1371c6535dbad557711ab20d3849199957c049e5032.scope: Deactivated successfully.
Dec  7 05:02:57 np0005549474 conmon[257629]: conmon 6117fe438adb8dc331f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6117fe438adb8dc331f9a1371c6535dbad557711ab20d3849199957c049e5032.scope/container/memory.events
Dec  7 05:02:57 np0005549474 podman[257612]: 2025-12-07 10:02:57.376110857 +0000 UTC m=+0.155222760 container died 6117fe438adb8dc331f9a1371c6535dbad557711ab20d3849199957c049e5032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_leavitt, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 05:02:57 np0005549474 systemd[1]: var-lib-containers-storage-overlay-11bb6487906cdb8766c099b8e1160b8cb9bb3ac6d7323c206085276b3288bca7-merged.mount: Deactivated successfully.
Dec  7 05:02:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:02:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:02:57 np0005549474 podman[257612]: 2025-12-07 10:02:57.421179044 +0000 UTC m=+0.200290937 container remove 6117fe438adb8dc331f9a1371c6535dbad557711ab20d3849199957c049e5032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:02:57 np0005549474 systemd[1]: libpod-conmon-6117fe438adb8dc331f9a1371c6535dbad557711ab20d3849199957c049e5032.scope: Deactivated successfully.
Dec  7 05:02:57 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:02:57 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:02:57 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:02:57 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:02:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:02:57 np0005549474 podman[257652]: 2025-12-07 10:02:57.649006339 +0000 UTC m=+0.060162049 container create 7fe63cd5083f91bf1bc006d2c5978f0b3259ce02d1b5d31f735cb6d9470d59bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_varahamihira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:02:57 np0005549474 systemd[1]: Started libpod-conmon-7fe63cd5083f91bf1bc006d2c5978f0b3259ce02d1b5d31f735cb6d9470d59bb.scope.
Dec  7 05:02:57 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:02:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57141c0db6edf52ae1c18ab9f81828dd6f16ca8ad26ebe26cd163f1732f8961c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:57 np0005549474 podman[257652]: 2025-12-07 10:02:57.628807319 +0000 UTC m=+0.039963039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:02:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57141c0db6edf52ae1c18ab9f81828dd6f16ca8ad26ebe26cd163f1732f8961c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57141c0db6edf52ae1c18ab9f81828dd6f16ca8ad26ebe26cd163f1732f8961c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57141c0db6edf52ae1c18ab9f81828dd6f16ca8ad26ebe26cd163f1732f8961c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57141c0db6edf52ae1c18ab9f81828dd6f16ca8ad26ebe26cd163f1732f8961c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:57 np0005549474 podman[257652]: 2025-12-07 10:02:57.742711982 +0000 UTC m=+0.153867692 container init 7fe63cd5083f91bf1bc006d2c5978f0b3259ce02d1b5d31f735cb6d9470d59bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_varahamihira, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:02:57 np0005549474 podman[257652]: 2025-12-07 10:02:57.755606492 +0000 UTC m=+0.166762232 container start 7fe63cd5083f91bf1bc006d2c5978f0b3259ce02d1b5d31f735cb6d9470d59bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_varahamihira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:02:57 np0005549474 podman[257652]: 2025-12-07 10:02:57.759599632 +0000 UTC m=+0.170755362 container attach 7fe63cd5083f91bf1bc006d2c5978f0b3259ce02d1b5d31f735cb6d9470d59bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:02:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:57 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:58 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:02:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:58 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:02:58 np0005549474 loving_varahamihira[257668]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:02:58 np0005549474 loving_varahamihira[257668]: --> All data devices are unavailable
Dec  7 05:02:58 np0005549474 systemd[1]: libpod-7fe63cd5083f91bf1bc006d2c5978f0b3259ce02d1b5d31f735cb6d9470d59bb.scope: Deactivated successfully.
Dec  7 05:02:58 np0005549474 podman[257652]: 2025-12-07 10:02:58.115464394 +0000 UTC m=+0.526620104 container died 7fe63cd5083f91bf1bc006d2c5978f0b3259ce02d1b5d31f735cb6d9470d59bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_varahamihira, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:02:58 np0005549474 systemd[1]: var-lib-containers-storage-overlay-57141c0db6edf52ae1c18ab9f81828dd6f16ca8ad26ebe26cd163f1732f8961c-merged.mount: Deactivated successfully.
Dec  7 05:02:58 np0005549474 podman[257652]: 2025-12-07 10:02:58.154031035 +0000 UTC m=+0.565186775 container remove 7fe63cd5083f91bf1bc006d2c5978f0b3259ce02d1b5d31f735cb6d9470d59bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:02:58 np0005549474 systemd[1]: libpod-conmon-7fe63cd5083f91bf1bc006d2c5978f0b3259ce02d1b5d31f735cb6d9470d59bb.scope: Deactivated successfully.
Dec  7 05:02:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:02:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:58.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:02:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:58 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:58 np0005549474 podman[257787]: 2025-12-07 10:02:58.715782495 +0000 UTC m=+0.038166601 container create f9750772eab2fb1cd5f804c5f51719a09ba51bd3a975aa4505d10a2fa2c949ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:02:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:02:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:02:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:58.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:02:58 np0005549474 systemd[1]: Started libpod-conmon-f9750772eab2fb1cd5f804c5f51719a09ba51bd3a975aa4505d10a2fa2c949ad.scope.
Dec  7 05:02:58 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:02:58 np0005549474 podman[257787]: 2025-12-07 10:02:58.789516762 +0000 UTC m=+0.111900878 container init f9750772eab2fb1cd5f804c5f51719a09ba51bd3a975aa4505d10a2fa2c949ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:02:58 np0005549474 podman[257787]: 2025-12-07 10:02:58.698998897 +0000 UTC m=+0.021383053 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:02:58 np0005549474 podman[257787]: 2025-12-07 10:02:58.796491813 +0000 UTC m=+0.118875959 container start f9750772eab2fb1cd5f804c5f51719a09ba51bd3a975aa4505d10a2fa2c949ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 05:02:58 np0005549474 podman[257787]: 2025-12-07 10:02:58.80007207 +0000 UTC m=+0.122456176 container attach f9750772eab2fb1cd5f804c5f51719a09ba51bd3a975aa4505d10a2fa2c949ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 05:02:58 np0005549474 systemd[1]: libpod-f9750772eab2fb1cd5f804c5f51719a09ba51bd3a975aa4505d10a2fa2c949ad.scope: Deactivated successfully.
Dec  7 05:02:58 np0005549474 wonderful_proskuriakova[257804]: 167 167
Dec  7 05:02:58 np0005549474 conmon[257804]: conmon f9750772eab2fb1cd5f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f9750772eab2fb1cd5f804c5f51719a09ba51bd3a975aa4505d10a2fa2c949ad.scope/container/memory.events
Dec  7 05:02:58 np0005549474 podman[257787]: 2025-12-07 10:02:58.803228756 +0000 UTC m=+0.125612862 container died f9750772eab2fb1cd5f804c5f51719a09ba51bd3a975aa4505d10a2fa2c949ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  7 05:02:58 np0005549474 systemd[1]: var-lib-containers-storage-overlay-21dc30caa22d26815ede09e9ab77c37eb5c5acb5be6a36452e99d4242b3ee0e1-merged.mount: Deactivated successfully.
Dec  7 05:02:58 np0005549474 podman[257787]: 2025-12-07 10:02:58.840217653 +0000 UTC m=+0.162601759 container remove f9750772eab2fb1cd5f804c5f51719a09ba51bd3a975aa4505d10a2fa2c949ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 05:02:58 np0005549474 systemd[1]: libpod-conmon-f9750772eab2fb1cd5f804c5f51719a09ba51bd3a975aa4505d10a2fa2c949ad.scope: Deactivated successfully.
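Note: the create -> init -> start -> attach -> died -> remove sequence for wonderful_proskuriakova is the lifecycle of a one-shot container: cephadm runs the ceph image under podman to execute a single command, conmon supervises it, and the transient libpod-*.scope and libpod-conmon-*.scope units deactivate as soon as it exits (the conmon cgroups warning is benign here, since the scope is already gone when conmon reads memory.events). A sketch of such a one-shot run, assuming podman is on PATH; the actual command cephadm executed is not in the log, so the payload below is hypothetical (the container printed "167 167", the ceph user's uid and gid):

    # Sketch of a one-shot `podman run --rm` like the sequence above.
    # ASSUMPTION: the real cephadm command is not logged; this stand-in
    # reproduces the "167 167" (ceph uid/gid) output.
    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"

    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "bash", "-c", 'echo "$(id -u ceph) $(id -g ceph)"'],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: "167 167"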
Dec  7 05:02:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Dec  7 05:02:58 np0005549474 podman[257830]: 2025-12-07 10:02:58.994666561 +0000 UTC m=+0.046063376 container create fbf3b63de1bc4d8bfac87deb1c0c29bf32d4904e8b46dd61406849fe1605ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hellman, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 05:02:59 np0005549474 systemd[1]: Started libpod-conmon-fbf3b63de1bc4d8bfac87deb1c0c29bf32d4904e8b46dd61406849fe1605ed55.scope.
Dec  7 05:02:59 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:02:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2218c3ebb1aabc512f052f7b2705396ea4a5f6913474a3bc5fcc0b71888d1b8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2218c3ebb1aabc512f052f7b2705396ea4a5f6913474a3bc5fcc0b71888d1b8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2218c3ebb1aabc512f052f7b2705396ea4a5f6913474a3bc5fcc0b71888d1b8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2218c3ebb1aabc512f052f7b2705396ea4a5f6913474a3bc5fcc0b71888d1b8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:02:59 np0005549474 podman[257830]: 2025-12-07 10:02:58.976453714 +0000 UTC m=+0.027850559 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:02:59 np0005549474 podman[257830]: 2025-12-07 10:02:59.080100257 +0000 UTC m=+0.131497092 container init fbf3b63de1bc4d8bfac87deb1c0c29bf32d4904e8b46dd61406849fe1605ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hellman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:02:59 np0005549474 podman[257830]: 2025-12-07 10:02:59.086130961 +0000 UTC m=+0.137527776 container start fbf3b63de1bc4d8bfac87deb1c0c29bf32d4904e8b46dd61406849fe1605ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 05:02:59 np0005549474 podman[257830]: 2025-12-07 10:02:59.090352717 +0000 UTC m=+0.141749552 container attach fbf3b63de1bc4d8bfac87deb1c0c29bf32d4904e8b46dd61406849fe1605ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hellman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  7 05:02:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:59 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]: {
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:    "0": [
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:        {
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            "devices": [
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "/dev/loop3"
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            ],
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            "lv_name": "ceph_lv0",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            "lv_size": "21470642176",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            "name": "ceph_lv0",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            "tags": {
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.cluster_name": "ceph",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.crush_device_class": "",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.encrypted": "0",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.osd_id": "0",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.type": "block",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.vdo": "0",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:                "ceph.with_tpm": "0"
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            },
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            "type": "block",
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:            "vg_name": "ceph_vg0"
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:        }
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]:    ]
Dec  7 05:02:59 np0005549474 gallant_hellman[257847]: }
Dec  7 05:02:59 np0005549474 systemd[1]: libpod-fbf3b63de1bc4d8bfac87deb1c0c29bf32d4904e8b46dd61406849fe1605ed55.scope: Deactivated successfully.
Dec  7 05:02:59 np0005549474 podman[257830]: 2025-12-07 10:02:59.384792996 +0000 UTC m=+0.436189861 container died fbf3b63de1bc4d8bfac87deb1c0c29bf32d4904e8b46dd61406849fe1605ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:02:59 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2218c3ebb1aabc512f052f7b2705396ea4a5f6913474a3bc5fcc0b71888d1b8f-merged.mount: Deactivated successfully.
Dec  7 05:02:59 np0005549474 podman[257830]: 2025-12-07 10:02:59.44186858 +0000 UTC m=+0.493265435 container remove fbf3b63de1bc4d8bfac87deb1c0c29bf32d4904e8b46dd61406849fe1605ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hellman, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:02:59 np0005549474 systemd[1]: libpod-conmon-fbf3b63de1bc4d8bfac87deb1c0c29bf32d4904e8b46dd61406849fe1605ed55.scope: Deactivated successfully.
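Note: the JSON block printed by gallant_hellman has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to a list of logical-volume records whose ceph.* LV tags carry the cluster fsid, OSD fsid and backing device. A minimal parsing sketch under that assumption:

    # Sketch: summarize the JSON above, assuming the shape of
    # `ceph-volume lvm list --format json` (OSD id -> list of LV records).
    import json

    def summarize_lvm_list(raw: str):
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                tags = lv.get("tags", {})
                yield {
                    "osd_id": osd_id,
                    "lv_path": lv.get("lv_path"),
                    "devices": lv.get("devices", []),
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "encrypted": tags.get("ceph.encrypted") == "1",
                }

    # For the block above this yields one record:
    # {'osd_id': '0', 'lv_path': '/dev/ceph_vg0/ceph_lv0',
    #  'devices': ['/dev/loop3'], 'osd_fsid': '32dc95f1-...', 'encrypted': False}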
Dec  7 05:02:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:02:59 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:02:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:02:59] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  7 05:02:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:02:59] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
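Note: the paired mgr access lines show Prometheus 2.51.0 scraping the ceph-mgr prometheus module, here at 10:02:59 and again at 10:03:09 below, i.e. a 10 s scrape interval, receiving a 48264-byte metrics payload each time. A sketch of the same scrape, assuming the module's default port 9283, which the log does not show:

    # Sketch of the /metrics scrape Prometheus performs against ceph-mgr.
    # ASSUMPTION: default mgr prometheus port 9283; the log shows only the path.
    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
        body = resp.read().decode()
    print(resp.status, len(body))  # the log shows 200 and a ~48 KiB body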
Dec  7 05:03:00 np0005549474 podman[257965]: 2025-12-07 10:03:00.026181156 +0000 UTC m=+0.046226831 container create 0457b22f2640fb0e3a65263b9a37c6f1fc077d4ab6609d7623255125c6db2aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:03:00 np0005549474 systemd[1]: Started libpod-conmon-0457b22f2640fb0e3a65263b9a37c6f1fc077d4ab6609d7623255125c6db2aab.scope.
Dec  7 05:03:00 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:03:00 np0005549474 podman[257965]: 2025-12-07 10:03:00.003695003 +0000 UTC m=+0.023740698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:03:00 np0005549474 podman[257965]: 2025-12-07 10:03:00.102010801 +0000 UTC m=+0.122056486 container init 0457b22f2640fb0e3a65263b9a37c6f1fc077d4ab6609d7623255125c6db2aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:03:00 np0005549474 podman[257965]: 2025-12-07 10:03:00.10968631 +0000 UTC m=+0.129731995 container start 0457b22f2640fb0e3a65263b9a37c6f1fc077d4ab6609d7623255125c6db2aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 05:03:00 np0005549474 recursing_fermi[257981]: 167 167
Dec  7 05:03:00 np0005549474 systemd[1]: libpod-0457b22f2640fb0e3a65263b9a37c6f1fc077d4ab6609d7623255125c6db2aab.scope: Deactivated successfully.
Dec  7 05:03:00 np0005549474 podman[257965]: 2025-12-07 10:03:00.113518284 +0000 UTC m=+0.133563949 container attach 0457b22f2640fb0e3a65263b9a37c6f1fc077d4ab6609d7623255125c6db2aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Dec  7 05:03:00 np0005549474 podman[257965]: 2025-12-07 10:03:00.114314086 +0000 UTC m=+0.134359751 container died 0457b22f2640fb0e3a65263b9a37c6f1fc077d4ab6609d7623255125c6db2aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 05:03:00 np0005549474 systemd[1]: var-lib-containers-storage-overlay-19a35f9c1ed098ce170559c49bca38957b71c394b788e393a736e8ece0fc27ff-merged.mount: Deactivated successfully.
Dec  7 05:03:00 np0005549474 podman[257965]: 2025-12-07 10:03:00.153600076 +0000 UTC m=+0.173645721 container remove 0457b22f2640fb0e3a65263b9a37c6f1fc077d4ab6609d7623255125c6db2aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 05:03:00 np0005549474 systemd[1]: libpod-conmon-0457b22f2640fb0e3a65263b9a37c6f1fc077d4ab6609d7623255125c6db2aab.scope: Deactivated successfully.
Dec  7 05:03:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:00.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:00 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:00 np0005549474 podman[258008]: 2025-12-07 10:03:00.341127523 +0000 UTC m=+0.050026034 container create 70555a81f6ab42291c91ac46d188cfc6f05696d49a4c9dcbe726036d448a0ad9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:03:00 np0005549474 systemd[1]: Started libpod-conmon-70555a81f6ab42291c91ac46d188cfc6f05696d49a4c9dcbe726036d448a0ad9.scope.
Dec  7 05:03:00 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:03:00 np0005549474 podman[258008]: 2025-12-07 10:03:00.32225393 +0000 UTC m=+0.031152441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:03:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b307a738563ff753b7e4a28f08c91cd6fe3bb44eec21a02e4274a50ce1edd931/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:03:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b307a738563ff753b7e4a28f08c91cd6fe3bb44eec21a02e4274a50ce1edd931/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:03:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b307a738563ff753b7e4a28f08c91cd6fe3bb44eec21a02e4274a50ce1edd931/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:03:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b307a738563ff753b7e4a28f08c91cd6fe3bb44eec21a02e4274a50ce1edd931/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:03:00 np0005549474 podman[258008]: 2025-12-07 10:03:00.433801357 +0000 UTC m=+0.142699868 container init 70555a81f6ab42291c91ac46d188cfc6f05696d49a4c9dcbe726036d448a0ad9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  7 05:03:00 np0005549474 podman[258008]: 2025-12-07 10:03:00.441851267 +0000 UTC m=+0.150749768 container start 70555a81f6ab42291c91ac46d188cfc6f05696d49a4c9dcbe726036d448a0ad9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 05:03:00 np0005549474 podman[258008]: 2025-12-07 10:03:00.448744195 +0000 UTC m=+0.157642706 container attach 70555a81f6ab42291c91ac46d188cfc6f05696d49a4c9dcbe726036d448a0ad9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec  7 05:03:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:00.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:03:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:01 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:03:01 np0005549474 lvm[258125]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:03:01 np0005549474 lvm[258125]: VG ceph_vg0 finished
Dec  7 05:03:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:01 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:01 np0005549474 agitated_galois[258025]: {}
Dec  7 05:03:01 np0005549474 systemd[1]: libpod-70555a81f6ab42291c91ac46d188cfc6f05696d49a4c9dcbe726036d448a0ad9.scope: Deactivated successfully.
Dec  7 05:03:01 np0005549474 podman[258008]: 2025-12-07 10:03:01.165388433 +0000 UTC m=+0.874286964 container died 70555a81f6ab42291c91ac46d188cfc6f05696d49a4c9dcbe726036d448a0ad9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:03:01 np0005549474 systemd[1]: libpod-70555a81f6ab42291c91ac46d188cfc6f05696d49a4c9dcbe726036d448a0ad9.scope: Consumed 1.158s CPU time.
Dec  7 05:03:01 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b307a738563ff753b7e4a28f08c91cd6fe3bb44eec21a02e4274a50ce1edd931-merged.mount: Deactivated successfully.
Dec  7 05:03:01 np0005549474 podman[258008]: 2025-12-07 10:03:01.209509515 +0000 UTC m=+0.918408016 container remove 70555a81f6ab42291c91ac46d188cfc6f05696d49a4c9dcbe726036d448a0ad9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  7 05:03:01 np0005549474 systemd[1]: libpod-conmon-70555a81f6ab42291c91ac46d188cfc6f05696d49a4c9dcbe726036d448a0ad9.scope: Deactivated successfully.
Dec  7 05:03:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:03:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:03:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:03:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
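Note: these mon audit lines show the cephadm mgr module persisting the freshly gathered host inventory via config-key set, under mgr/cephadm/host.compute-0.devices.0 (device inventory, chunk 0) and mgr/cephadm/host.compute-0 (host metadata); the values themselves are elided from the audit record. A sketch that reads one of those keys back, assuming a working ceph CLI and keyring on the host and that the stored value is JSON (which is how cephadm caches inventory):

    # Sketch: read back the inventory blob cephadm just persisted.
    # Key name copied from the mon audit line above.
    import json, subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    raw = subprocess.run(
        ["ceph", "config-key", "get", key],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(raw))  # cached device inventory for compute-0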
Dec  7 05:03:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:01 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:02.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:02 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:03:02 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:03:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:02 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:02.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:03:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:03 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:03 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:04.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:04 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:04.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:03:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:05 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:05 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:06.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:06 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:06.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:03:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100306 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
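Note: this haproxy state change plausibly also explains the recurring ganesha.nfsd TIRPC EVENTs throughout this window: a Layer4 check is a bare TCP connect that haproxy closes without sending payload, so ntirpc's svc_vc_recv aborts its header read ("proxy header rest len failed") and marks that transport dead, while the check itself still passes. A sketch of such a probe, with illustrative host and port since the log shows neither:

    # Sketch of a Layer4 (TCP connect-then-close) health probe like haproxy's.
    # ASSUMPTION: host/port are illustrative; against a listener expecting a
    # full header this produces exactly the svc_vc_recv EVENT noise above.
    import socket

    with socket.create_connection(("127.0.0.1", 2049), timeout=2):
        pass  # connect succeeded -> check passes; closing aborts the header read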
Dec  7 05:03:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:03:07.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
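Note: here Alertmanager gives up notifying the ceph-dashboard webhook receivers on compute-1 and compute-2 after two attempts each with "context deadline exceeded", i.e. the POSTs timed out rather than being refused. A reachability sketch using the URLs copied from the error message (the 5 s timeout stands in for Alertmanager's deadline, which the log does not state):

    # Sketch: probe the dashboard webhook receivers Alertmanager timed out on.
    # URLs copied from the error above; timeout value is an assumption.
    import urllib.request

    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        try:
            req = urllib.request.Request(url, data=b"{}", method="POST")
            urllib.request.urlopen(req, timeout=5)
            print(url, "reachable")
        except Exception as exc:  # diagnostic sketch, so catch broadly
            print(url, "failed:", exc)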
Dec  7 05:03:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:07 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:07 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:08.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:08 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:08.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:03:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:09 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:09 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:09] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  7 05:03:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:09] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec  7 05:03:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:10.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:10 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:10.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.755 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.755 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.755 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.768 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.768 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.768 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.769 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.769 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.769 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.770 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.770 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.770 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.803 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.803 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.803 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.803 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:03:10 np0005549474 nova_compute[256753]: 2025-12-07 10:03:10.804 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:03:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Dec  7 05:03:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:11 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:03:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3695092401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:03:11 np0005549474 nova_compute[256753]: 2025-12-07 10:03:11.248 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
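Note: nova's resource tracker shells out to the exact command logged above, ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf, to size the RBD-backed disk inventory; the two ceph-mon audit lines are the server side of that same call. A sketch of the call (command line copied from the log) and of reading the cluster totals back from its JSON:

    # Sketch of the `ceph df --format=json` call nova makes, plus parsing.
    # Command line copied verbatim from the log above.
    import json, subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    df = json.loads(raw)
    # Cluster-wide totals; per-pool stats live under df["pools"].
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])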
Dec  7 05:03:11 np0005549474 podman[258201]: 2025-12-07 10:03:11.263044249 +0000 UTC m=+0.067401667 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  7 05:03:11 np0005549474 podman[258202]: 2025-12-07 10:03:11.318044507 +0000 UTC m=+0.123051013 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 05:03:11 np0005549474 nova_compute[256753]: 2025-12-07 10:03:11.393 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:03:11 np0005549474 nova_compute[256753]: 2025-12-07 10:03:11.394 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4885MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:03:11 np0005549474 nova_compute[256753]: 2025-12-07 10:03:11.395 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:03:11 np0005549474 nova_compute[256753]: 2025-12-07 10:03:11.395 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:03:11 np0005549474 nova_compute[256753]: 2025-12-07 10:03:11.535 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:03:11 np0005549474 nova_compute[256753]: 2025-12-07 10:03:11.535 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:03:11 np0005549474 nova_compute[256753]: 2025-12-07 10:03:11.589 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:03:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:11 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:03:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4007945682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:03:12 np0005549474 nova_compute[256753]: 2025-12-07 10:03:12.090 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
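Note: the 0.5 s `ceph df` probe above is how the resource tracker measures shared-storage capacity on an RBD-backed compute node. A minimal reproduction of the call, assuming the common `stats.total_avail_bytes` field layout of `ceph df --format=json` output:

```python
# Sketch: re-run the probe logged above and extract free space.
# Assumption: a top-level "stats" object with total_bytes/total_avail_bytes,
# as in common ceph df JSON output; verify against your Ceph release.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(out)["stats"]
print("free: %.2f GiB of %.2f GiB"
      % (stats["total_avail_bytes"] / 2**30, stats["total_bytes"] / 2**30))
```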
Dec  7 05:03:12 np0005549474 nova_compute[256753]: 2025-12-07 10:03:12.098 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:03:12 np0005549474 nova_compute[256753]: 2025-12-07 10:03:12.121 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
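Note: the inventory dict above is what the scheduler actually allocates against. Under standard placement semantics, schedulable capacity per resource class is (total - reserved) * allocation_ratio; a worked check with the exact numbers logged:

```python
# Worked example using the inventory reported above. Placement's standard
# rule: schedulable capacity = (total - reserved) * allocation_ratio.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# MEMORY_MB: 7168, VCPU: 32, DISK_GB: 53.1
```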
Dec  7 05:03:12 np0005549474 nova_compute[256753]: 2025-12-07 10:03:12.123 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:03:12 np0005549474 nova_compute[256753]: 2025-12-07 10:03:12.123 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
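Note: the Acquiring/acquired/released trio bracketing this update (held 0.728s) is oslo.concurrency's lock tracing. A sketch of the decorator pattern that produces it; illustrative only, since nova wraps lockutils in its own helper:

```python
# Sketch of the locking pattern behind the "Acquiring lock / acquired /
# released :: held 0.728s" lines above (illustrative; nova's actual code
# uses its own wrapper around oslo_concurrency.lockutils).
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def _update_available_resource():
    # Everything between "acquired" and "released" in the log runs here:
    # refresh the hypervisor view, recompute inventory, report to placement.
    pass
```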
Dec  7 05:03:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:12.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
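Note: the anonymous "HEAD / HTTP/1.0" requests arriving every ~2 s, alternating between 192.168.122.100 and .102, read like load-balancer health probes against radosgw. A small parser for the beast access-log format seen here (field layout inferred from these lines only):

```python
# Parser for the beast access-log lines above; the field layout is an
# assumption inferred from this log, not a documented radosgw format.
import re

BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
    r'latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
        '[07/Dec/2025:10:03:12.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.match(line)
assert m and m["status"] == "200" and m["ip"] == "192.168.122.100"
```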
Dec  7 05:03:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:12 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:03:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:03:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:03:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:03:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:03:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:03:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:03:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:03:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:12.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:03:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:13 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:13 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100314 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
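Note: haproxy's "Layer4 connection problem" check above is a bare TCP connect attempt; "Connection refused" marks backend/nfs.cephfs.0 DOWN, and a later successful connect (10:03:34 below) brings it back UP. A minimal equivalent of that check, as a sketch:

```python
# Minimal equivalent of haproxy's Layer4 check: a plain TCP connect.
# ECONNREFUSED (or a timeout) means DOWN; a successful connect means UP.
import socket

def l4_check(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeouts, unreachable
        return False

# e.g. l4_check("127.0.0.1", 2049)  # 2049 assumed as the NFS backend port
```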
Dec  7 05:03:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:14.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:14 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:14.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:03:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:15 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:15 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:16.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:16 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:16.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:03:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:03:17.081Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:03:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:17 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:17 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:18.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:18 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:18.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:03:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:19 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:19 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:19] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:03:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:19] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:03:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:20.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:20 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:20 np0005549474 radosgw[96353]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Dec  7 05:03:20 np0005549474 radosgw[96353]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec  7 05:03:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:20.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:03:21 np0005549474 podman[258305]: 2025-12-07 10:03:21.0730191 +0000 UTC m=+0.062464552 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  7 05:03:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:21 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:21 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:22.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:22 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:22.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:03:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:22 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:03:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:23 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:23 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:24.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:24 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:03:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:24.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:03:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 597 B/s wr, 166 op/s
Dec  7 05:03:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:25 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:25 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:25 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:03:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:25 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:03:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:25 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:03:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:26.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:26 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:26.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 597 B/s wr, 166 op/s
Dec  7 05:03:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:03:27.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:03:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:27 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:03:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:03:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:27 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:28.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:28 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:28.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 597 B/s wr, 166 op/s
Dec  7 05:03:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:28 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
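Note: the grace-period lifecycle is visible above: grace entered at 10:03:22 with a 90 s budget, client reclaim info reloaded at 10:03:25, and with "reclaim complete(0) clid count(0)" the server lifted grace at 10:03:28, after only ~6 s. A simplified sketch of that early-lift logic (the real checks live in nfs-ganesha's nfs_try_lift_grace()):

```python
# Sketch of the grace logic above (simplified assumption, not ganesha's
# actual code): grace runs for up to GRACE_SECONDS but may be lifted early
# once no client still has reclaims pending.
import time

GRACE_SECONDS = 90  # matches "duration 90" in the log

def may_lift_grace(grace_started: float, clids_pending_reclaim: int) -> bool:
    expired = time.monotonic() - grace_started >= GRACE_SECONDS
    return expired or clids_pending_reclaim == 0
```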
Dec  7 05:03:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:29 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:29 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:29] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:03:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:29] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:03:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:30.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:30 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:30.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 1023 B/s wr, 168 op/s
Dec  7 05:03:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:31 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:31 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:32.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:32 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:32.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v632: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 1023 B/s wr, 168 op/s
Dec  7 05:03:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:33 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:33 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100334 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:03:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:34.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:34 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:34.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v633: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 1023 B/s wr, 168 op/s
Dec  7 05:03:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:35 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:35 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8004580 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:36.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:36 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:36.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 05:03:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100336 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:03:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:03:37.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:03:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:03:37.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
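Note: the dispatcher errors above (10:03:17, 10:03:27, 10:03:37) show Alertmanager's ceph-dashboard webhook integration timing out against the compute-1 and compute-2 receivers. From the sender's side the integration is a JSON POST to the receiver URL in the message; a sketch, with the URL taken from the log and the payload shape a placeholder assumption:

```python
# Sketch of the webhook POST that is timing out above. The endpoint URL is
# from the log; the payload shape here is a placeholder assumption.
import json
import urllib.request

def notify(url: str, payload: dict, timeout: float = 10.0) -> int:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    # Raises URLError on refusal/timeout, the analogue of the retries that
    # end in "context deadline exceeded" in the dispatcher errors.
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status

# notify("http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
#        {"alerts": []})
```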
Dec  7 05:03:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:37 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003cb0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:37 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:38.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:38 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a8000b60 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:03:38.615 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:03:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:03:38.616 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:03:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:03:38.616 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:03:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:38.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v635: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 05:03:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:39 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:39 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0002830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:39] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:03:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:39] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:03:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:40.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:40 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:40.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v636: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 05:03:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:41 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a80016a0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:41 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.24592 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  7 05:03:41 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  7 05:03:41 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  7 05:03:41 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.24734 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  7 05:03:41 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  7 05:03:41 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  7 05:03:41 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.24734 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec  7 05:03:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:41 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a80016a0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:42.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:42 np0005549474 podman[258373]: 2025-12-07 10:03:42.283011644 +0000 UTC m=+0.086949969 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:03:42 np0005549474 podman[258374]: 2025-12-07 10:03:42.319659562 +0000 UTC m=+0.127213705 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 05:03:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:42 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0002830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:03:42
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data', '.nfs', 'images', 'default.rgw.control']
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:03:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:03:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:03:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
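Note: the pg_autoscaler targets above follow a recoverable pattern: each "pg target" equals the pool's space ratio times its bias times 300, where 300 would be this root's PG budget (e.g. 3 OSDs x 100 target PGs per OSD; an assumption fitted to these numbers). A consistency check against the logged values:

```python
# Consistency check of the autoscaler lines above. Inferred relation
# (assumption, fitted to this log): pg_target = ratio * bias * 300.
rows = [
    (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
]
for pool, ratio, bias, logged_target in rows:
    assert abs(ratio * bias * 300 - logged_target) < 1e-12, pool
print("all pg targets consistent with ratio * bias * 300")
```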
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:03:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:42.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v637: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec  7 05:03:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:43 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:43 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:44.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:44 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a80016a0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:44.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v638: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 1 op/s
Dec  7 05:03:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:45 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0002830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:45 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:46.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:46 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:46.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v639: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:03:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:03:47.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:03:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:47 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a80016a0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:47 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:03:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:47 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0002830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:48.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:48 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:48.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v640: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:03:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:49 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:49 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a8002f00 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:49] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:03:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:49] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:03:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:50.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:50 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:03:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:50 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:03:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:50 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0003240 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:50.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v641: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  7 05:03:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:51 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:51 np0005549474 podman[258428]: 2025-12-07 10:03:51.263556212 +0000 UTC m=+0.078609662 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  7 05:03:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:51 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:52.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:52 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a8002f00 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:52.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v642: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  7 05:03:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:53 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a8002f00 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:53 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:03:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:53 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:54.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:54 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:54.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v643: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:03:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:55 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a8002f00 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:55 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0003240 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:56.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:56 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0003240 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:56.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v644: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 05:03:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:03:57.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:03:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:57 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:03:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:03:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:03:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:57 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a8003c10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:03:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:58.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:03:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:58 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a8003c10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:03:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:03:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:58.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:03:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v645: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:03:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100358 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:03:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:59 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a8003c10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:03:59 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:03:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:59] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:03:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:03:59] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:04:00 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.24613 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  7 05:04:00 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  7 05:04:00 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  7 05:04:00 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.24616 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  7 05:04:00 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  7 05:04:00 np0005549474 ceph-mgr[74811]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  7 05:04:00 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.24613 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec  7 05:04:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:00.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:00 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0003240 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:00.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:04:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:01 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:01 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a8003c10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:02.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:02 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:02.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861232648' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  7 05:04:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861232648' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  7 05:04:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  7 05:04:03 np0005549474 podman[258660]: 2025-12-07 10:04:03.04099707 +0000 UTC m=+0.052567723 container create 058064ec55b292e91dd28a669004ba8bfad9bdf4ed34f52422c56e9ebdb177c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 05:04:03 np0005549474 systemd[1]: Started libpod-conmon-058064ec55b292e91dd28a669004ba8bfad9bdf4ed34f52422c56e9ebdb177c4.scope.
Dec  7 05:04:03 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:04:03 np0005549474 podman[258660]: 2025-12-07 10:04:03.015806604 +0000 UTC m=+0.027377277 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:04:03 np0005549474 podman[258660]: 2025-12-07 10:04:03.114686977 +0000 UTC m=+0.126257650 container init 058064ec55b292e91dd28a669004ba8bfad9bdf4ed34f52422c56e9ebdb177c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:04:03 np0005549474 podman[258660]: 2025-12-07 10:04:03.122478669 +0000 UTC m=+0.134049292 container start 058064ec55b292e91dd28a669004ba8bfad9bdf4ed34f52422c56e9ebdb177c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sanderson, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:04:03 np0005549474 podman[258660]: 2025-12-07 10:04:03.125341807 +0000 UTC m=+0.136912440 container attach 058064ec55b292e91dd28a669004ba8bfad9bdf4ed34f52422c56e9ebdb177c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sanderson, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 05:04:03 np0005549474 romantic_sanderson[258676]: 167 167
Dec  7 05:04:03 np0005549474 systemd[1]: libpod-058064ec55b292e91dd28a669004ba8bfad9bdf4ed34f52422c56e9ebdb177c4.scope: Deactivated successfully.
Dec  7 05:04:03 np0005549474 podman[258660]: 2025-12-07 10:04:03.131434213 +0000 UTC m=+0.143004876 container died 058064ec55b292e91dd28a669004ba8bfad9bdf4ed34f52422c56e9ebdb177c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sanderson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 05:04:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:03 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0003240 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:03 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e7a97ea142c0bcfb590696e3ec579a2129222f23ab71306f4e4f9caaa98e6728-merged.mount: Deactivated successfully.
Dec  7 05:04:03 np0005549474 podman[258660]: 2025-12-07 10:04:03.178272759 +0000 UTC m=+0.189843382 container remove 058064ec55b292e91dd28a669004ba8bfad9bdf4ed34f52422c56e9ebdb177c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sanderson, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 05:04:03 np0005549474 systemd[1]: libpod-conmon-058064ec55b292e91dd28a669004ba8bfad9bdf4ed34f52422c56e9ebdb177c4.scope: Deactivated successfully.
Dec  7 05:04:03 np0005549474 podman[258702]: 2025-12-07 10:04:03.347531378 +0000 UTC m=+0.046481696 container create 36115246ec52d95f72c9d939af959b124560c7063aaaff447bdd802b7bd11e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:04:03 np0005549474 systemd[1]: Started libpod-conmon-36115246ec52d95f72c9d939af959b124560c7063aaaff447bdd802b7bd11e27.scope.
Dec  7 05:04:03 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:04:03 np0005549474 podman[258702]: 2025-12-07 10:04:03.328908522 +0000 UTC m=+0.027858880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:04:03 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7674ce7b4e51a68f45069b45498578497728d45311bba0ba9ff0dbfad4d59478/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:03 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7674ce7b4e51a68f45069b45498578497728d45311bba0ba9ff0dbfad4d59478/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:03 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7674ce7b4e51a68f45069b45498578497728d45311bba0ba9ff0dbfad4d59478/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:03 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7674ce7b4e51a68f45069b45498578497728d45311bba0ba9ff0dbfad4d59478/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:03 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7674ce7b4e51a68f45069b45498578497728d45311bba0ba9ff0dbfad4d59478/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:03 np0005549474 podman[258702]: 2025-12-07 10:04:03.446663809 +0000 UTC m=+0.145614157 container init 36115246ec52d95f72c9d939af959b124560c7063aaaff447bdd802b7bd11e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 05:04:03 np0005549474 podman[258702]: 2025-12-07 10:04:03.460759993 +0000 UTC m=+0.159710301 container start 36115246ec52d95f72c9d939af959b124560c7063aaaff447bdd802b7bd11e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:04:03 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:04:03 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:04:03 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:04:03 np0005549474 podman[258702]: 2025-12-07 10:04:03.465314556 +0000 UTC m=+0.164264974 container attach 36115246ec52d95f72c9d939af959b124560c7063aaaff447bdd802b7bd11e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ganguly, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 05:04:03 np0005549474 adoring_ganguly[258719]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:04:03 np0005549474 adoring_ganguly[258719]: --> All data devices are unavailable
Dec  7 05:04:03 np0005549474 systemd[1]: libpod-36115246ec52d95f72c9d939af959b124560c7063aaaff447bdd802b7bd11e27.scope: Deactivated successfully.
Dec  7 05:04:03 np0005549474 podman[258702]: 2025-12-07 10:04:03.816442561 +0000 UTC m=+0.515392869 container died 36115246ec52d95f72c9d939af959b124560c7063aaaff447bdd802b7bd11e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 05:04:03 np0005549474 systemd[1]: var-lib-containers-storage-overlay-7674ce7b4e51a68f45069b45498578497728d45311bba0ba9ff0dbfad4d59478-merged.mount: Deactivated successfully.
Dec  7 05:04:03 np0005549474 podman[258702]: 2025-12-07 10:04:03.853339195 +0000 UTC m=+0.552289493 container remove 36115246ec52d95f72c9d939af959b124560c7063aaaff447bdd802b7bd11e27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 05:04:03 np0005549474 systemd[1]: libpod-conmon-36115246ec52d95f72c9d939af959b124560c7063aaaff447bdd802b7bd11e27.scope: Deactivated successfully.
Dec  7 05:04:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:03 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:04.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:04 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a8003c10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:04 np0005549474 podman[258836]: 2025-12-07 10:04:04.440100806 +0000 UTC m=+0.041549011 container create d1ded81f78ce39f84b990bf42a673278516951ab78c3d3b292a89d7b147cf01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 05:04:04 np0005549474 systemd[1]: Started libpod-conmon-d1ded81f78ce39f84b990bf42a673278516951ab78c3d3b292a89d7b147cf01d.scope.
Dec  7 05:04:04 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:04:04 np0005549474 podman[258836]: 2025-12-07 10:04:04.423428862 +0000 UTC m=+0.024877087 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:04:04 np0005549474 podman[258836]: 2025-12-07 10:04:04.515634234 +0000 UTC m=+0.117082489 container init d1ded81f78ce39f84b990bf42a673278516951ab78c3d3b292a89d7b147cf01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 05:04:04 np0005549474 podman[258836]: 2025-12-07 10:04:04.521947436 +0000 UTC m=+0.123395641 container start d1ded81f78ce39f84b990bf42a673278516951ab78c3d3b292a89d7b147cf01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:04:04 np0005549474 podman[258836]: 2025-12-07 10:04:04.524444825 +0000 UTC m=+0.125893070 container attach d1ded81f78ce39f84b990bf42a673278516951ab78c3d3b292a89d7b147cf01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:04:04 np0005549474 boring_lederberg[258853]: 167 167
Dec  7 05:04:04 np0005549474 systemd[1]: libpod-d1ded81f78ce39f84b990bf42a673278516951ab78c3d3b292a89d7b147cf01d.scope: Deactivated successfully.
Dec  7 05:04:04 np0005549474 podman[258836]: 2025-12-07 10:04:04.525908314 +0000 UTC m=+0.127356519 container died d1ded81f78ce39f84b990bf42a673278516951ab78c3d3b292a89d7b147cf01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 05:04:04 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b924ddb86c34fa3914f8fb3f874c159668cd2b43a0ab9040036904d2a18248ba-merged.mount: Deactivated successfully.
Dec  7 05:04:04 np0005549474 podman[258836]: 2025-12-07 10:04:04.55844122 +0000 UTC m=+0.159889425 container remove d1ded81f78ce39f84b990bf42a673278516951ab78c3d3b292a89d7b147cf01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_lederberg, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 05:04:04 np0005549474 systemd[1]: libpod-conmon-d1ded81f78ce39f84b990bf42a673278516951ab78c3d3b292a89d7b147cf01d.scope: Deactivated successfully.
Dec  7 05:04:04 np0005549474 podman[258878]: 2025-12-07 10:04:04.704583961 +0000 UTC m=+0.036703971 container create 9e232c60574e19150c5bf1c9dd60ba93c0ecf3e85f6efa67be301d17ed97a252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cohen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:04:04 np0005549474 systemd[1]: Started libpod-conmon-9e232c60574e19150c5bf1c9dd60ba93c0ecf3e85f6efa67be301d17ed97a252.scope.
Dec  7 05:04:04 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:04:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a321692ccdd928406a98b1d937d4faf8ba8b5f48f0864f760d4a799a70a937/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a321692ccdd928406a98b1d937d4faf8ba8b5f48f0864f760d4a799a70a937/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a321692ccdd928406a98b1d937d4faf8ba8b5f48f0864f760d4a799a70a937/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a321692ccdd928406a98b1d937d4faf8ba8b5f48f0864f760d4a799a70a937/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:04 np0005549474 podman[258878]: 2025-12-07 10:04:04.77836629 +0000 UTC m=+0.110486380 container init 9e232c60574e19150c5bf1c9dd60ba93c0ecf3e85f6efa67be301d17ed97a252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cohen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 05:04:04 np0005549474 podman[258878]: 2025-12-07 10:04:04.688876722 +0000 UTC m=+0.020996752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:04:04 np0005549474 podman[258878]: 2025-12-07 10:04:04.786308876 +0000 UTC m=+0.118428896 container start 9e232c60574e19150c5bf1c9dd60ba93c0ecf3e85f6efa67be301d17ed97a252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cohen, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:04:04 np0005549474 podman[258878]: 2025-12-07 10:04:04.789985606 +0000 UTC m=+0.122105676 container attach 9e232c60574e19150c5bf1c9dd60ba93c0ecf3e85f6efa67be301d17ed97a252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cohen, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:04:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:04.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec  7 05:04:05 np0005549474 objective_cohen[258895]: {
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:    "0": [
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:        {
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            "devices": [
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "/dev/loop3"
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            ],
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            "lv_name": "ceph_lv0",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            "lv_size": "21470642176",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            "name": "ceph_lv0",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            "tags": {
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.cluster_name": "ceph",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.crush_device_class": "",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.encrypted": "0",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.osd_id": "0",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.type": "block",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.vdo": "0",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:                "ceph.with_tpm": "0"
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            },
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            "type": "block",
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:            "vg_name": "ceph_vg0"
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:        }
Dec  7 05:04:05 np0005549474 objective_cohen[258895]:    ]
Dec  7 05:04:05 np0005549474 objective_cohen[258895]: }
Dec  7 05:04:05 np0005549474 systemd[1]: libpod-9e232c60574e19150c5bf1c9dd60ba93c0ecf3e85f6efa67be301d17ed97a252.scope: Deactivated successfully.
Dec  7 05:04:05 np0005549474 podman[258878]: 2025-12-07 10:04:05.098234242 +0000 UTC m=+0.430354272 container died 9e232c60574e19150c5bf1c9dd60ba93c0ecf3e85f6efa67be301d17ed97a252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 05:04:05 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b1a321692ccdd928406a98b1d937d4faf8ba8b5f48f0864f760d4a799a70a937-merged.mount: Deactivated successfully.
Dec  7 05:04:05 np0005549474 podman[258878]: 2025-12-07 10:04:05.135670602 +0000 UTC m=+0.467790642 container remove 9e232c60574e19150c5bf1c9dd60ba93c0ecf3e85f6efa67be301d17ed97a252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:04:05 np0005549474 systemd[1]: libpod-conmon-9e232c60574e19150c5bf1c9dd60ba93c0ecf3e85f6efa67be301d17ed97a252.scope: Deactivated successfully.
Dec  7 05:04:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:05 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:05 np0005549474 podman[259009]: 2025-12-07 10:04:05.695749057 +0000 UTC m=+0.064089607 container create 741fbfa15425f8b3e7481a4613b3544a1c4ebaf64f42a7725b0894a642116f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lumiere, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 05:04:05 np0005549474 systemd[1]: Started libpod-conmon-741fbfa15425f8b3e7481a4613b3544a1c4ebaf64f42a7725b0894a642116f36.scope.
Dec  7 05:04:05 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:04:05 np0005549474 podman[259009]: 2025-12-07 10:04:05.67460557 +0000 UTC m=+0.042946160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:04:05 np0005549474 podman[259009]: 2025-12-07 10:04:05.777452352 +0000 UTC m=+0.145792942 container init 741fbfa15425f8b3e7481a4613b3544a1c4ebaf64f42a7725b0894a642116f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lumiere, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 05:04:05 np0005549474 podman[259009]: 2025-12-07 10:04:05.784233596 +0000 UTC m=+0.152574166 container start 741fbfa15425f8b3e7481a4613b3544a1c4ebaf64f42a7725b0894a642116f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:04:05 np0005549474 podman[259009]: 2025-12-07 10:04:05.788247986 +0000 UTC m=+0.156588586 container attach 741fbfa15425f8b3e7481a4613b3544a1c4ebaf64f42a7725b0894a642116f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:04:05 np0005549474 compassionate_lumiere[259025]: 167 167
Dec  7 05:04:05 np0005549474 systemd[1]: libpod-741fbfa15425f8b3e7481a4613b3544a1c4ebaf64f42a7725b0894a642116f36.scope: Deactivated successfully.
Dec  7 05:04:05 np0005549474 podman[259009]: 2025-12-07 10:04:05.791458883 +0000 UTC m=+0.159799503 container died 741fbfa15425f8b3e7481a4613b3544a1c4ebaf64f42a7725b0894a642116f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 05:04:05 np0005549474 systemd[1]: var-lib-containers-storage-overlay-df5f8d5af3d633dff44ba28faf11fa1716f3b921ea9c0b9d8673155a504e70ea-merged.mount: Deactivated successfully.
Dec  7 05:04:05 np0005549474 podman[259009]: 2025-12-07 10:04:05.844757885 +0000 UTC m=+0.213098445 container remove 741fbfa15425f8b3e7481a4613b3544a1c4ebaf64f42a7725b0894a642116f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 05:04:05 np0005549474 systemd[1]: libpod-conmon-741fbfa15425f8b3e7481a4613b3544a1c4ebaf64f42a7725b0894a642116f36.scope: Deactivated successfully.
Dec  7 05:04:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:05 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0003240 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:06 np0005549474 podman[259048]: 2025-12-07 10:04:06.055906556 +0000 UTC m=+0.049569861 container create c23b1cced0fcae052d6278f50ba6e03d1d441df8035fc8dcfae9ae52a0722577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 05:04:06 np0005549474 systemd[1]: Started libpod-conmon-c23b1cced0fcae052d6278f50ba6e03d1d441df8035fc8dcfae9ae52a0722577.scope.
Dec  7 05:04:06 np0005549474 podman[259048]: 2025-12-07 10:04:06.036380474 +0000 UTC m=+0.030043829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:04:06 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:04:06 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b684f531b1494430909e934b6165818e7faeb8b4498e457fc069992b602d777/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:06 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b684f531b1494430909e934b6165818e7faeb8b4498e457fc069992b602d777/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:06 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b684f531b1494430909e934b6165818e7faeb8b4498e457fc069992b602d777/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:06 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b684f531b1494430909e934b6165818e7faeb8b4498e457fc069992b602d777/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:04:06 np0005549474 podman[259048]: 2025-12-07 10:04:06.165070879 +0000 UTC m=+0.158734294 container init c23b1cced0fcae052d6278f50ba6e03d1d441df8035fc8dcfae9ae52a0722577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 05:04:06 np0005549474 podman[259048]: 2025-12-07 10:04:06.173432757 +0000 UTC m=+0.167096062 container start c23b1cced0fcae052d6278f50ba6e03d1d441df8035fc8dcfae9ae52a0722577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_solomon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:04:06 np0005549474 podman[259048]: 2025-12-07 10:04:06.176767097 +0000 UTC m=+0.170430442 container attach c23b1cced0fcae052d6278f50ba6e03d1d441df8035fc8dcfae9ae52a0722577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  7 05:04:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:06.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:06 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:06 np0005549474 lvm[259138]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:04:06 np0005549474 lvm[259138]: VG ceph_vg0 finished
Dec  7 05:04:06 np0005549474 keen_solomon[259064]: {}
Dec  7 05:04:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:06.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:06 np0005549474 systemd[1]: libpod-c23b1cced0fcae052d6278f50ba6e03d1d441df8035fc8dcfae9ae52a0722577.scope: Deactivated successfully.
Dec  7 05:04:06 np0005549474 systemd[1]: libpod-c23b1cced0fcae052d6278f50ba6e03d1d441df8035fc8dcfae9ae52a0722577.scope: Consumed 1.109s CPU time.
Dec  7 05:04:06 np0005549474 podman[259048]: 2025-12-07 10:04:06.854688001 +0000 UTC m=+0.848351346 container died c23b1cced0fcae052d6278f50ba6e03d1d441df8035fc8dcfae9ae52a0722577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_solomon, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 05:04:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4b684f531b1494430909e934b6165818e7faeb8b4498e457fc069992b602d777-merged.mount: Deactivated successfully.
Dec  7 05:04:06 np0005549474 podman[259048]: 2025-12-07 10:04:06.90970977 +0000 UTC m=+0.903373115 container remove c23b1cced0fcae052d6278f50ba6e03d1d441df8035fc8dcfae9ae52a0722577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 05:04:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 05:04:06 np0005549474 systemd[1]: libpod-conmon-c23b1cced0fcae052d6278f50ba6e03d1d441df8035fc8dcfae9ae52a0722577.scope: Deactivated successfully.
Dec  7 05:04:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:04:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:04:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:04:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:04:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:04:07.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:04:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:04:07.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:04:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:07 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:07 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0003240 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:07 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:04:07 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:04:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:08.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:08 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:08.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  7 05:04:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:09 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4002690 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:09 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b80014d0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:09] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:04:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:09] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:04:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:10.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:10 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:10.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 05:04:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:11 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:11 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4002690 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.117 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.182 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.184 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.184 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.185 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  7 05:04:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:12.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:12 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b80014d0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:04:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:04:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:04:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:04:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:04:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:04:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:04:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:04:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.825 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.825 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.825 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.826 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.826 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:04:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:12.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.875 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.875 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.876 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.876 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  7 05:04:12 np0005549474 nova_compute[256753]: 2025-12-07 10:04:12.877 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:04:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:04:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:13 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:13 np0005549474 podman[259210]: 2025-12-07 10:04:13.294183607 +0000 UTC m=+0.094625902 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  7 05:04:13 np0005549474 podman[259211]: 2025-12-07 10:04:13.317669729 +0000 UTC m=+0.119187573 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  7 05:04:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:04:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/196889449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:04:13 np0005549474 nova_compute[256753]: 2025-12-07 10:04:13.345 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:04:13 np0005549474 nova_compute[256753]: 2025-12-07 10:04:13.571 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:04:13 np0005549474 nova_compute[256753]: 2025-12-07 10:04:13.574 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4934MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  7 05:04:13 np0005549474 nova_compute[256753]: 2025-12-07 10:04:13.574 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:04:13 np0005549474 nova_compute[256753]: 2025-12-07 10:04:13.575 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:04:13 np0005549474 nova_compute[256753]: 2025-12-07 10:04:13.672 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  7 05:04:13 np0005549474 nova_compute[256753]: 2025-12-07 10:04:13.672 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  7 05:04:13 np0005549474 nova_compute[256753]: 2025-12-07 10:04:13.704 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:04:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:13 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:04:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2674510480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:04:14 np0005549474 nova_compute[256753]: 2025-12-07 10:04:14.193 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:04:14 np0005549474 nova_compute[256753]: 2025-12-07 10:04:14.202 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  7 05:04:14 np0005549474 nova_compute[256753]: 2025-12-07 10:04:14.221 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  7 05:04:14 np0005549474 nova_compute[256753]: 2025-12-07 10:04:14.224 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  7 05:04:14 np0005549474 nova_compute[256753]: 2025-12-07 10:04:14.224 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:04:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:14.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:14 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4002690 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:14.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 05:04:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:15 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001670 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:15 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100416 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:04:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:16.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:16 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:16.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:04:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:04:17.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:04:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:17 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:17 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4002690 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:18.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:18 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:18.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:04:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:19 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001670 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:19 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003430 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:19] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  7 05:04:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:19] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  7 05:04:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:20.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:20 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:20.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:04:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:21 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:21 np0005549474 podman[259313]: 2025-12-07 10:04:21.470851261 +0000 UTC m=+0.082342066 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  7 05:04:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:21 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001670 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:22.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:22 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003430 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:22.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:04:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:23 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:23 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:24.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:24 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001670 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:24 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:04:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:24.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:04:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:25 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003430 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:25 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:26.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:26 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.003000081s ======
Dec  7 05:04:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:26.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Dec  7 05:04:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:04:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:04:27.089Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:04:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:04:27.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:04:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:27 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001670 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:04:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:04:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:27 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:04:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:27 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:04:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:27 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003430 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 05:04:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:28.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 05:04:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:28 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:28.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec  7 05:04:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:29 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:29 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001670 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:29] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:04:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:29] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:04:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:04:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:30.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:04:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:30 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003430 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:30 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:04:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:30.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:04:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:31 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003cc0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:31 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:32.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:32 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:32.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec  7 05:04:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:33 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4003430 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:33 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003ce0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:34.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:34 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003ce0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:34.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:04:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:35 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003ce0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:35 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0000b60 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100436 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:04:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:36.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:36 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:36.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:04:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:04:37.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:04:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:04:37.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:04:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:37 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003ce0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:37 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003ce0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:38.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:38 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a00016a0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:04:38.616 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:04:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:04:38.616 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:04:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:04:38.617 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:04:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:38.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:04:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:39 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:39 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003ce0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:39] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:04:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:39] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:04:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:40.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:40 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003ce0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:40.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 767 B/s wr, 2 op/s
Dec  7 05:04:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:41 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a00016a0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:41 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:42.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:04:42
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.meta', 'images', 'volumes', '.mgr', '.nfs', 'cephfs.cephfs.data']
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:04:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:04:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:04:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:42 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004340 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:04:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:04:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:42.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:04:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:43 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003ce0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:43 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a00016a0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:44.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:44 np0005549474 podman[259384]: 2025-12-07 10:04:44.289091023 +0000 UTC m=+0.089118370 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  7 05:04:44 np0005549474 podman[259385]: 2025-12-07 10:04:44.320523314 +0000 UTC m=+0.114080214 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:04:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:44 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:44 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Check health
Dec  7 05:04:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:44.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:04:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:45 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004360 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:46.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:46 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003ce0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:46 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0002b10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:46.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:04:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:04:47.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:04:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:04:47.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:04:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:47 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:48.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:48 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004380 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:48 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003ce0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:48.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:04:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:49 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0002b10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:49] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 05:04:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:49] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 05:04:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:50 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:50.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:50 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c00043a0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:50.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:04:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:51 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003ce0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:52 np0005549474 podman[259437]: 2025-12-07 10:04:52.264611501 +0000 UTC m=+0.075959771 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec  7 05:04:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:52 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0002b10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:52.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:52 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:52 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:04:52.899 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:04:52 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:04:52.900 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  7 05:04:52 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:04:52.901 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:04:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:04:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:52.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:04:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:04:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:53 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c00043c0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:54 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003d00 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:54.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:54 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0003c10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:54.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 05:04:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:55 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:56 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c00043e0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:56.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:56 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003d20 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:56.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:04:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:04:57.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:04:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:57 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0003c10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:04:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:04:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:04:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:58 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:58.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:58 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004400 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:04:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:04:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:58.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:04:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:04:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:04:59 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003d40 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:04:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:59] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:04:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:04:59] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:05:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:00 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0003c10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:00.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:00 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:00.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:01 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004420 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:02 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003d60 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:05:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:02.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:05:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:02 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0003c10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  7 05:05:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/191301356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  7 05:05:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  7 05:05:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/191301356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  7 05:05:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:02.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:03 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:04 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004440 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 05:05:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:04.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 05:05:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:04 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003d80 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:04.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 05:05:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:05 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0003c10 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:06 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004440 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:06.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:06 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:05:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:06.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:05:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:05:07.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:05:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:05:07.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:05:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:05:07.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:05:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:07 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  7 05:05:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 05:05:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:08 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:08.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:08 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:08 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 05:05:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:08.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:09 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004000 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:09] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:05:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:09] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:10 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:10.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:10 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:10 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:05:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:10.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:05:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:05:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:11 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:11 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:05:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=cleanup t=2025-12-07T10:05:11.70859804Z level=info msg="Completed cleanup jobs" duration=57.680179ms
Dec  7 05:05:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=grafana.update.checker t=2025-12-07T10:05:11.790558444Z level=info msg="Update check succeeded" duration=54.045389ms
Dec  7 05:05:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=plugins.update.checker t=2025-12-07T10:05:11.791636454Z level=info msg="Update check succeeded" duration=60.367583ms
Dec  7 05:05:11 np0005549474 podman[259680]: 2025-12-07 10:05:11.864144779 +0000 UTC m=+0.053512497 container create ca8398c0326a0c7f368ecd4aa600ffb6601193ab8dd633b8a5d6cdbab48262ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_haslett, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 05:05:11 np0005549474 systemd[1]: Started libpod-conmon-ca8398c0326a0c7f368ecd4aa600ffb6601193ab8dd633b8a5d6cdbab48262ea.scope.
Dec  7 05:05:11 np0005549474 podman[259680]: 2025-12-07 10:05:11.836266736 +0000 UTC m=+0.025634524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:05:11 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:05:11 np0005549474 podman[259680]: 2025-12-07 10:05:11.961693668 +0000 UTC m=+0.151061436 container init ca8398c0326a0c7f368ecd4aa600ffb6601193ab8dd633b8a5d6cdbab48262ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_haslett, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:05:11 np0005549474 podman[259680]: 2025-12-07 10:05:11.973810071 +0000 UTC m=+0.163177799 container start ca8398c0326a0c7f368ecd4aa600ffb6601193ab8dd633b8a5d6cdbab48262ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 05:05:11 np0005549474 podman[259680]: 2025-12-07 10:05:11.977749368 +0000 UTC m=+0.167117146 container attach ca8398c0326a0c7f368ecd4aa600ffb6601193ab8dd633b8a5d6cdbab48262ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_haslett, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  7 05:05:11 np0005549474 peaceful_haslett[259696]: 167 167
Dec  7 05:05:11 np0005549474 systemd[1]: libpod-ca8398c0326a0c7f368ecd4aa600ffb6601193ab8dd633b8a5d6cdbab48262ea.scope: Deactivated successfully.
Dec  7 05:05:11 np0005549474 conmon[259696]: conmon ca8398c0326a0c7f368e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca8398c0326a0c7f368ecd4aa600ffb6601193ab8dd633b8a5d6cdbab48262ea.scope/container/memory.events
Dec  7 05:05:11 np0005549474 podman[259680]: 2025-12-07 10:05:11.983098215 +0000 UTC m=+0.172465973 container died ca8398c0326a0c7f368ecd4aa600ffb6601193ab8dd633b8a5d6cdbab48262ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_haslett, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 05:05:12 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d70e41fc1503d943bc8c99cd3ba5f257d71b1067737542195885e6d3348d50c5-merged.mount: Deactivated successfully.
Dec  7 05:05:12 np0005549474 podman[259680]: 2025-12-07 10:05:12.042186562 +0000 UTC m=+0.231554290 container remove ca8398c0326a0c7f368ecd4aa600ffb6601193ab8dd633b8a5d6cdbab48262ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_haslett, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:05:12 np0005549474 systemd[1]: libpod-conmon-ca8398c0326a0c7f368ecd4aa600ffb6601193ab8dd633b8a5d6cdbab48262ea.scope: Deactivated successfully.
Dec  7 05:05:12 np0005549474 nova_compute[256753]: 2025-12-07 10:05:12.152 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:05:12 np0005549474 nova_compute[256753]: 2025-12-07 10:05:12.153 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:05:12 np0005549474 podman[259719]: 2025-12-07 10:05:12.302599681 +0000 UTC m=+0.071587681 container create a6762830fc97278d13d267cd5e1258ccfebcc67095ff424a99e9cab162b4f109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  7 05:05:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:12 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac0041a0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:12.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:12 np0005549474 systemd[1]: Started libpod-conmon-a6762830fc97278d13d267cd5e1258ccfebcc67095ff424a99e9cab162b4f109.scope.
Dec  7 05:05:12 np0005549474 podman[259719]: 2025-12-07 10:05:12.274175333 +0000 UTC m=+0.043163403 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:05:12 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:05:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154a1f395d5acfbea23b26a4fd2871ce448c68329e407fa5ad01473c2aa769ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154a1f395d5acfbea23b26a4fd2871ce448c68329e407fa5ad01473c2aa769ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154a1f395d5acfbea23b26a4fd2871ce448c68329e407fa5ad01473c2aa769ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154a1f395d5acfbea23b26a4fd2871ce448c68329e407fa5ad01473c2aa769ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/154a1f395d5acfbea23b26a4fd2871ce448c68329e407fa5ad01473c2aa769ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:12 np0005549474 podman[259719]: 2025-12-07 10:05:12.403323908 +0000 UTC m=+0.172311998 container init a6762830fc97278d13d267cd5e1258ccfebcc67095ff424a99e9cab162b4f109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galileo, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 05:05:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:05:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:05:12 np0005549474 podman[259719]: 2025-12-07 10:05:12.420256092 +0000 UTC m=+0.189244122 container start a6762830fc97278d13d267cd5e1258ccfebcc67095ff424a99e9cab162b4f109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galileo, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:05:12 np0005549474 podman[259719]: 2025-12-07 10:05:12.425388272 +0000 UTC m=+0.194376302 container attach a6762830fc97278d13d267cd5e1258ccfebcc67095ff424a99e9cab162b4f109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galileo, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 05:05:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:12 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:05:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:05:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:05:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:05:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:05:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:05:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:12 np0005549474 nova_compute[256753]: 2025-12-07 10:05:12.750 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:05:12 np0005549474 nova_compute[256753]: 2025-12-07 10:05:12.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:05:12 np0005549474 nova_compute[256753]: 2025-12-07 10:05:12.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:05:12 np0005549474 nova_compute[256753]: 2025-12-07 10:05:12.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  7 05:05:12 np0005549474 jovial_galileo[259736]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:05:12 np0005549474 jovial_galileo[259736]: --> All data devices are unavailable
Dec  7 05:05:12 np0005549474 systemd[1]: libpod-a6762830fc97278d13d267cd5e1258ccfebcc67095ff424a99e9cab162b4f109.scope: Deactivated successfully.
Dec  7 05:05:12 np0005549474 podman[259719]: 2025-12-07 10:05:12.804835759 +0000 UTC m=+0.573823759 container died a6762830fc97278d13d267cd5e1258ccfebcc67095ff424a99e9cab162b4f109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:05:12 np0005549474 systemd[1]: var-lib-containers-storage-overlay-154a1f395d5acfbea23b26a4fd2871ce448c68329e407fa5ad01473c2aa769ec-merged.mount: Deactivated successfully.
Dec  7 05:05:12 np0005549474 podman[259719]: 2025-12-07 10:05:12.870364573 +0000 UTC m=+0.639352603 container remove a6762830fc97278d13d267cd5e1258ccfebcc67095ff424a99e9cab162b4f109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galileo, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 05:05:12 np0005549474 systemd[1]: libpod-conmon-a6762830fc97278d13d267cd5e1258ccfebcc67095ff424a99e9cab162b4f109.scope: Deactivated successfully.
Dec  7 05:05:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:12.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:13 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:13 np0005549474 podman[259855]: 2025-12-07 10:05:13.643079736 +0000 UTC m=+0.078762468 container create 2ca9ee193bf465cac15ded265f0e43de8852a6f88bfa9746dc2ef27594a04397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brahmagupta, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:05:13 np0005549474 systemd[1]: Started libpod-conmon-2ca9ee193bf465cac15ded265f0e43de8852a6f88bfa9746dc2ef27594a04397.scope.
Dec  7 05:05:13 np0005549474 podman[259855]: 2025-12-07 10:05:13.608253202 +0000 UTC m=+0.043935974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:05:13 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:05:13 np0005549474 podman[259855]: 2025-12-07 10:05:13.751268117 +0000 UTC m=+0.186950819 container init 2ca9ee193bf465cac15ded265f0e43de8852a6f88bfa9746dc2ef27594a04397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brahmagupta, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 05:05:13 np0005549474 nova_compute[256753]: 2025-12-07 10:05:13.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:05:13 np0005549474 nova_compute[256753]: 2025-12-07 10:05:13.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  7 05:05:13 np0005549474 nova_compute[256753]: 2025-12-07 10:05:13.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  7 05:05:13 np0005549474 podman[259855]: 2025-12-07 10:05:13.765066245 +0000 UTC m=+0.200748987 container start 2ca9ee193bf465cac15ded265f0e43de8852a6f88bfa9746dc2ef27594a04397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 05:05:13 np0005549474 nova_compute[256753]: 2025-12-07 10:05:13.767 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:05:13 np0005549474 nova_compute[256753]: 2025-12-07 10:05:13.768 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:05:13 np0005549474 nova_compute[256753]: 2025-12-07 10:05:13.768 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:05:13 np0005549474 podman[259855]: 2025-12-07 10:05:13.770111643 +0000 UTC m=+0.205794365 container attach 2ca9ee193bf465cac15ded265f0e43de8852a6f88bfa9746dc2ef27594a04397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brahmagupta, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 05:05:13 np0005549474 recursing_brahmagupta[259871]: 167 167
Dec  7 05:05:13 np0005549474 systemd[1]: libpod-2ca9ee193bf465cac15ded265f0e43de8852a6f88bfa9746dc2ef27594a04397.scope: Deactivated successfully.
Dec  7 05:05:13 np0005549474 podman[259855]: 2025-12-07 10:05:13.774863984 +0000 UTC m=+0.210546716 container died 2ca9ee193bf465cac15ded265f0e43de8852a6f88bfa9746dc2ef27594a04397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 05:05:13 np0005549474 nova_compute[256753]: 2025-12-07 10:05:13.785 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:05:13 np0005549474 nova_compute[256753]: 2025-12-07 10:05:13.786 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:05:13 np0005549474 nova_compute[256753]: 2025-12-07 10:05:13.786 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:05:13 np0005549474 nova_compute[256753]: 2025-12-07 10:05:13.786 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:05:13 np0005549474 nova_compute[256753]: 2025-12-07 10:05:13.786 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:05:13 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b7e9e55681f4ad31324ee15f5ac3d2abe060b9cceb9834e3ecaa0108bda74f95-merged.mount: Deactivated successfully.
Dec  7 05:05:13 np0005549474 podman[259855]: 2025-12-07 10:05:13.828123572 +0000 UTC m=+0.263806314 container remove 2ca9ee193bf465cac15ded265f0e43de8852a6f88bfa9746dc2ef27594a04397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:05:13 np0005549474 systemd[1]: libpod-conmon-2ca9ee193bf465cac15ded265f0e43de8852a6f88bfa9746dc2ef27594a04397.scope: Deactivated successfully.
Dec  7 05:05:14 np0005549474 podman[259915]: 2025-12-07 10:05:14.068418849 +0000 UTC m=+0.071269162 container create 333419ce8adeb1a81416a32cbd832ae4333f4d13018576106acab82ec616e663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  7 05:05:14 np0005549474 systemd[1]: Started libpod-conmon-333419ce8adeb1a81416a32cbd832ae4333f4d13018576106acab82ec616e663.scope.
Dec  7 05:05:14 np0005549474 podman[259915]: 2025-12-07 10:05:14.03230292 +0000 UTC m=+0.035153283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:05:14 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:05:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7636341d187d51fd034e989450b5ea15c79143b85cb34cae11eed9b359906f2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7636341d187d51fd034e989450b5ea15c79143b85cb34cae11eed9b359906f2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7636341d187d51fd034e989450b5ea15c79143b85cb34cae11eed9b359906f2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:14 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7636341d187d51fd034e989450b5ea15c79143b85cb34cae11eed9b359906f2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:14 np0005549474 podman[259915]: 2025-12-07 10:05:14.173941588 +0000 UTC m=+0.176791911 container init 333419ce8adeb1a81416a32cbd832ae4333f4d13018576106acab82ec616e663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:05:14 np0005549474 podman[259915]: 2025-12-07 10:05:14.179521991 +0000 UTC m=+0.182372274 container start 333419ce8adeb1a81416a32cbd832ae4333f4d13018576106acab82ec616e663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 05:05:14 np0005549474 podman[259915]: 2025-12-07 10:05:14.183282933 +0000 UTC m=+0.186133466 container attach 333419ce8adeb1a81416a32cbd832ae4333f4d13018576106acab82ec616e663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:05:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:14 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a00040f0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:05:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:14.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:05:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:05:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1265336556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:05:14 np0005549474 nova_compute[256753]: 2025-12-07 10:05:14.382 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:05:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:14 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac0041c0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:14 np0005549474 keen_swanson[259931]: {
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:    "0": [
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:        {
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            "devices": [
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "/dev/loop3"
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            ],
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            "lv_name": "ceph_lv0",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            "lv_size": "21470642176",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            "name": "ceph_lv0",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            "tags": {
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.cluster_name": "ceph",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.crush_device_class": "",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.encrypted": "0",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.osd_id": "0",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.type": "block",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.vdo": "0",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:                "ceph.with_tpm": "0"
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            },
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            "type": "block",
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:            "vg_name": "ceph_vg0"
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:        }
Dec  7 05:05:14 np0005549474 keen_swanson[259931]:    ]
Dec  7 05:05:14 np0005549474 keen_swanson[259931]: }
Dec  7 05:05:14 np0005549474 systemd[1]: libpod-333419ce8adeb1a81416a32cbd832ae4333f4d13018576106acab82ec616e663.scope: Deactivated successfully.
Dec  7 05:05:14 np0005549474 podman[259915]: 2025-12-07 10:05:14.525127781 +0000 UTC m=+0.527978064 container died 333419ce8adeb1a81416a32cbd832ae4333f4d13018576106acab82ec616e663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:05:14 np0005549474 systemd[1]: var-lib-containers-storage-overlay-7636341d187d51fd034e989450b5ea15c79143b85cb34cae11eed9b359906f2f-merged.mount: Deactivated successfully.
Dec  7 05:05:14 np0005549474 podman[259915]: 2025-12-07 10:05:14.579819919 +0000 UTC m=+0.582670192 container remove 333419ce8adeb1a81416a32cbd832ae4333f4d13018576106acab82ec616e663 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 05:05:14 np0005549474 systemd[1]: libpod-conmon-333419ce8adeb1a81416a32cbd832ae4333f4d13018576106acab82ec616e663.scope: Deactivated successfully.
Dec  7 05:05:14 np0005549474 nova_compute[256753]: 2025-12-07 10:05:14.645 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:05:14 np0005549474 nova_compute[256753]: 2025-12-07 10:05:14.647 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4884MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:05:14 np0005549474 nova_compute[256753]: 2025-12-07 10:05:14.647 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:05:14 np0005549474 nova_compute[256753]: 2025-12-07 10:05:14.648 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:05:14 np0005549474 podman[259943]: 2025-12-07 10:05:14.650917045 +0000 UTC m=+0.092121863 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:05:14 np0005549474 podman[259945]: 2025-12-07 10:05:14.686283013 +0000 UTC m=+0.123477621 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  7 05:05:14 np0005549474 nova_compute[256753]: 2025-12-07 10:05:14.716 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:05:14 np0005549474 nova_compute[256753]: 2025-12-07 10:05:14.717 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:05:14 np0005549474 nova_compute[256753]: 2025-12-07 10:05:14.744 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:05:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:05:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:14.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:05:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 05:05:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:15 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:05:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4006115281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:05:15 np0005549474 nova_compute[256753]: 2025-12-07 10:05:15.234 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:05:15 np0005549474 nova_compute[256753]: 2025-12-07 10:05:15.244 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:05:15 np0005549474 nova_compute[256753]: 2025-12-07 10:05:15.264 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:05:15 np0005549474 nova_compute[256753]: 2025-12-07 10:05:15.267 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:05:15 np0005549474 nova_compute[256753]: 2025-12-07 10:05:15.268 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:05:15 np0005549474 podman[260110]: 2025-12-07 10:05:15.262399984 +0000 UTC m=+0.037427166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:05:15 np0005549474 podman[260110]: 2025-12-07 10:05:15.356038707 +0000 UTC m=+0.131065809 container create a4e6e336bd97da49f2ae1cfc5f14331aa4785ac07c0abc14d8b6ba41ebe87eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 05:05:15 np0005549474 systemd[1]: Started libpod-conmon-a4e6e336bd97da49f2ae1cfc5f14331aa4785ac07c0abc14d8b6ba41ebe87eb3.scope.
Dec  7 05:05:15 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:05:15 np0005549474 podman[260110]: 2025-12-07 10:05:15.448446907 +0000 UTC m=+0.223473989 container init a4e6e336bd97da49f2ae1cfc5f14331aa4785ac07c0abc14d8b6ba41ebe87eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 05:05:15 np0005549474 podman[260110]: 2025-12-07 10:05:15.456503068 +0000 UTC m=+0.231530170 container start a4e6e336bd97da49f2ae1cfc5f14331aa4785ac07c0abc14d8b6ba41ebe87eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Dec  7 05:05:15 np0005549474 podman[260110]: 2025-12-07 10:05:15.460552178 +0000 UTC m=+0.235579260 container attach a4e6e336bd97da49f2ae1cfc5f14331aa4785ac07c0abc14d8b6ba41ebe87eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:05:15 np0005549474 hardcore_solomon[260128]: 167 167
Dec  7 05:05:15 np0005549474 systemd[1]: libpod-a4e6e336bd97da49f2ae1cfc5f14331aa4785ac07c0abc14d8b6ba41ebe87eb3.scope: Deactivated successfully.
Dec  7 05:05:15 np0005549474 podman[260133]: 2025-12-07 10:05:15.545239947 +0000 UTC m=+0.057820854 container died a4e6e336bd97da49f2ae1cfc5f14331aa4785ac07c0abc14d8b6ba41ebe87eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  7 05:05:15 np0005549474 systemd[1]: var-lib-containers-storage-overlay-28f3eecda5e2f4be297db09d20d45b5cd18c8f566f312b20512b4451e7f9eb0b-merged.mount: Deactivated successfully.
Dec  7 05:05:15 np0005549474 podman[260133]: 2025-12-07 10:05:15.590323001 +0000 UTC m=+0.102903858 container remove a4e6e336bd97da49f2ae1cfc5f14331aa4785ac07c0abc14d8b6ba41ebe87eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 05:05:15 np0005549474 systemd[1]: libpod-conmon-a4e6e336bd97da49f2ae1cfc5f14331aa4785ac07c0abc14d8b6ba41ebe87eb3.scope: Deactivated successfully.
Dec  7 05:05:15 np0005549474 podman[260155]: 2025-12-07 10:05:15.842163775 +0000 UTC m=+0.073873954 container create 68695b160b0aea21b7f48def9617204e2fd81f7973e5c9979a4a899307930c33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 05:05:15 np0005549474 systemd[1]: Started libpod-conmon-68695b160b0aea21b7f48def9617204e2fd81f7973e5c9979a4a899307930c33.scope.
Dec  7 05:05:15 np0005549474 podman[260155]: 2025-12-07 10:05:15.80947166 +0000 UTC m=+0.041181839 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:05:15 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:05:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bbb1d5f751e7d61700f84f604c48eeb3050e8fe2a80c306d02662929d941642/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bbb1d5f751e7d61700f84f604c48eeb3050e8fe2a80c306d02662929d941642/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bbb1d5f751e7d61700f84f604c48eeb3050e8fe2a80c306d02662929d941642/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bbb1d5f751e7d61700f84f604c48eeb3050e8fe2a80c306d02662929d941642/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:05:15 np0005549474 podman[260155]: 2025-12-07 10:05:15.97788518 +0000 UTC m=+0.209595369 container init 68695b160b0aea21b7f48def9617204e2fd81f7973e5c9979a4a899307930c33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 05:05:15 np0005549474 podman[260155]: 2025-12-07 10:05:15.989551119 +0000 UTC m=+0.221261268 container start 68695b160b0aea21b7f48def9617204e2fd81f7973e5c9979a4a899307930c33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 05:05:15 np0005549474 podman[260155]: 2025-12-07 10:05:15.993645842 +0000 UTC m=+0.225356001 container attach 68695b160b0aea21b7f48def9617204e2fd81f7973e5c9979a4a899307930c33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:05:16 np0005549474 nova_compute[256753]: 2025-12-07 10:05:16.254 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:05:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:16 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:16.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:16 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:16 np0005549474 lvm[260246]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:05:16 np0005549474 lvm[260246]: VG ceph_vg0 finished
Dec  7 05:05:16 np0005549474 recursing_driscoll[260171]: {}
Dec  7 05:05:16 np0005549474 systemd[1]: libpod-68695b160b0aea21b7f48def9617204e2fd81f7973e5c9979a4a899307930c33.scope: Deactivated successfully.
Dec  7 05:05:16 np0005549474 systemd[1]: libpod-68695b160b0aea21b7f48def9617204e2fd81f7973e5c9979a4a899307930c33.scope: Consumed 1.293s CPU time.
Dec  7 05:05:16 np0005549474 podman[260155]: 2025-12-07 10:05:16.789732104 +0000 UTC m=+1.021442243 container died 68695b160b0aea21b7f48def9617204e2fd81f7973e5c9979a4a899307930c33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_driscoll, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:05:16 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0bbb1d5f751e7d61700f84f604c48eeb3050e8fe2a80c306d02662929d941642-merged.mount: Deactivated successfully.
Dec  7 05:05:16 np0005549474 podman[260155]: 2025-12-07 10:05:16.829338108 +0000 UTC m=+1.061048247 container remove 68695b160b0aea21b7f48def9617204e2fd81f7973e5c9979a4a899307930c33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:05:16 np0005549474 systemd[1]: libpod-conmon-68695b160b0aea21b7f48def9617204e2fd81f7973e5c9979a4a899307930c33.scope: Deactivated successfully.
Dec  7 05:05:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:05:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:05:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:16.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:05:17.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:05:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:17 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a00040f0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:17 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:17 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:05:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:18 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004200 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:18.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:18 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a00040f0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:18.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:19 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:19] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:05:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:19] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:05:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:20 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:05:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:20.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:05:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:20 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004220 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:05:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:20.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:05:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:21 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004220 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:22 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:05:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:22.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:05:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:22 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:22.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:23 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a00040f0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:23 np0005549474 podman[260321]: 2025-12-07 10:05:23.302342293 +0000 UTC m=+0.105664464 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  7 05:05:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:24 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004240 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:05:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:24.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:05:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:24 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 05:05:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:24.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:25 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:26 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a00040f0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:26.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:26 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004260 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:26.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:05:27.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:05:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:05:27.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:05:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:05:27.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:05:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:27 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:05:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:05:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:28 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 05:05:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:28.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 05:05:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:28 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a00040f0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:05:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:28.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:05:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:29 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac004280 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:29] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 05:05:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:29] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 05:05:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:30 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:05:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:30.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:05:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:30 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:30.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:31 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004110 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:32 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac0042a0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:32.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:32 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:32.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:33 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:34 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004130 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:34.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:34 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac0042c0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 05:05:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:05:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:34.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:05:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:35 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:36 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:36.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:36 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004150 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:36.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:05:37.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:05:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:37 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23ac0042e0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:38 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:38.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:38 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:05:38.617 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:05:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:05:38.618 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:05:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:05:38.618 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:05:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:38.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:39 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004170 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:39] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 05:05:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:39] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec  7 05:05:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:40 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc001080 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:40.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:40 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0001230 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:40.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:41 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:42 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004170 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:42.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:05:42
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'volumes', '.nfs', 'backups', 'cephfs.cephfs.meta']
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:05:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:05:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:05:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:42 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc002320 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:05:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:05:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v697: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:42.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:43 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c00013d0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.202520) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944202581, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2118, "num_deletes": 251, "total_data_size": 4128649, "memory_usage": 4199872, "flush_reason": "Manual Compaction"}
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944237609, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4047840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20113, "largest_seqno": 22230, "table_properties": {"data_size": 4038214, "index_size": 6056, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19723, "raw_average_key_size": 20, "raw_value_size": 4019136, "raw_average_value_size": 4126, "num_data_blocks": 265, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101730, "oldest_key_time": 1765101730, "file_creation_time": 1765101944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 35153 microseconds, and 15317 cpu microseconds.
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.237672) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4047840 bytes OK
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.237709) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.239904) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.239929) EVENT_LOG_v1 {"time_micros": 1765101944239922, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.239951) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4119980, prev total WAL file size 4119980, number of live WAL files 2.
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.241786) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3952KB)], [44(12MB)]
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944241837, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17630500, "oldest_snapshot_seqno": -1}
Dec  7 05:05:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:44 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:44.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5557 keys, 15441137 bytes, temperature: kUnknown
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944391038, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 15441137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15401302, "index_size": 24813, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 140117, "raw_average_key_size": 25, "raw_value_size": 15298230, "raw_average_value_size": 2752, "num_data_blocks": 1025, "num_entries": 5557, "num_filter_entries": 5557, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765101944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.391304) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15441137 bytes
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.392765) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.1 rd, 103.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 13.0 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 6077, records dropped: 520 output_compression: NoCompression
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.392787) EVENT_LOG_v1 {"time_micros": 1765101944392777, "job": 22, "event": "compaction_finished", "compaction_time_micros": 149257, "compaction_time_cpu_micros": 53438, "output_level": 6, "num_output_files": 1, "total_output_size": 15441137, "num_input_records": 6077, "num_output_records": 5557, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944393764, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944396784, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.241672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.396884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.396892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.396897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.396901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:05:44 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:05:44.396906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:05:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:44 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004170 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 05:05:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:44.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:45 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c00013d0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:45 np0005549474 podman[260390]: 2025-12-07 10:05:45.449757808 +0000 UTC m=+0.075400445 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  7 05:05:45 np0005549474 podman[260392]: 2025-12-07 10:05:45.480836739 +0000 UTC m=+0.102928569 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  7 05:05:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:46 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc002320 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:46.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:46 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4001090 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v699: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:46.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:05:47.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:05:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:05:47.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:05:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:47 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004170 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:48 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c00013d0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:48.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:48 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003030 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v700: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:48.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:49 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b40010b0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:49] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  7 05:05:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:49] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec  7 05:05:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:50 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004170 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:50.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:50 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c00013d0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:50.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:51 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003030 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:52 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b40010d0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:52.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:52 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004170 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v702: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:05:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:52.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:05:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:53 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c00013d0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:54 np0005549474 podman[260448]: 2025-12-07 10:05:54.281388888 +0000 UTC m=+0.087009582 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  7 05:05:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:54 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003030 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:54.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:54 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v703: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 05:05:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:54.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:55 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004190 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:56 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c00013d0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:56.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:56 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc003030 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v704: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:57.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:05:57.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:05:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:57 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:05:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:05:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:05:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:58 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a00041b0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:58.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:58 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c00013f0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v705: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:05:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:05:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:05:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:59.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:05:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:05:59 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:05:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:59] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  7 05:05:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:05:59] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  7 05:06:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:00 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:00.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:00 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v706: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:01.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:01 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:02 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:02.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:02 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v707: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:03 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:04 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:04.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:04 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v708: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 05:06:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:05.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:05 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:06 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004250 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:06.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:06 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v709: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:07.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:06:07.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:06:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:07 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:08 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:08.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:08 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23a0004270 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v710: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:09.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:09 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:09] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  7 05:06:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:09] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  7 05:06:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:10 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:10.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:10 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:10 np0005549474 nova_compute[256753]: 2025-12-07 10:06:10.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:06:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v711: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:11.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:11 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:11 np0005549474 nova_compute[256753]: 2025-12-07 10:06:11.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:06:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:12 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:12.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:06:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:06:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:06:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:06:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:12 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:06:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:06:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:06:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:06:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:12 np0005549474 nova_compute[256753]: 2025-12-07 10:06:12.749 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:06:12 np0005549474 nova_compute[256753]: 2025-12-07 10:06:12.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:06:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v712: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:13.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:13 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.773 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.774 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.775 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.775 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.776 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.796 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.796 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.796 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.796 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:06:13 np0005549474 nova_compute[256753]: 2025-12-07 10:06:13.797 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:06:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:06:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2333995897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:06:14 np0005549474 nova_compute[256753]: 2025-12-07 10:06:14.298 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:06:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:14 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:06:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:14.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:06:14 np0005549474 nova_compute[256753]: 2025-12-07 10:06:14.477 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:06:14 np0005549474 nova_compute[256753]: 2025-12-07 10:06:14.478 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4937MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:06:14 np0005549474 nova_compute[256753]: 2025-12-07 10:06:14.478 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:06:14 np0005549474 nova_compute[256753]: 2025-12-07 10:06:14.479 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:06:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:14 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:14 np0005549474 nova_compute[256753]: 2025-12-07 10:06:14.550 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:06:14 np0005549474 nova_compute[256753]: 2025-12-07 10:06:14.551 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:06:14 np0005549474 nova_compute[256753]: 2025-12-07 10:06:14.572 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:06:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v713: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 05:06:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:15.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:06:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3698721280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:06:15 np0005549474 nova_compute[256753]: 2025-12-07 10:06:15.071 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:06:15 np0005549474 nova_compute[256753]: 2025-12-07 10:06:15.079 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:06:15 np0005549474 nova_compute[256753]: 2025-12-07 10:06:15.098 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:06:15 np0005549474 nova_compute[256753]: 2025-12-07 10:06:15.101 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:06:15 np0005549474 nova_compute[256753]: 2025-12-07 10:06:15.101 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:06:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:15 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:16 np0005549474 nova_compute[256753]: 2025-12-07 10:06:16.080 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:06:16 np0005549474 nova_compute[256753]: 2025-12-07 10:06:16.204 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:06:16 np0005549474 podman[260564]: 2025-12-07 10:06:16.271014005 +0000 UTC m=+0.069108333 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  7 05:06:16 np0005549474 podman[260565]: 2025-12-07 10:06:16.307494274 +0000 UTC m=+0.110102636 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec  7 05:06:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:16 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:16.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:16 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8002ad0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v714: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:17.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:06:17.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:06:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:06:17.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:06:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:06:17.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:06:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:17 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:06:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:06:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:06:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:06:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:06:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:18 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:18.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:18 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v715: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:19.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:06:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:19 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001ff0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:06:19 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:06:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:06:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:06:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:06:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:06:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:06:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:06:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:06:19 np0005549474 podman[260785]: 2025-12-07 10:06:19.782613073 +0000 UTC m=+0.021926671 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:06:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:19] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:06:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:19] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec  7 05:06:20 np0005549474 podman[260785]: 2025-12-07 10:06:20.035120417 +0000 UTC m=+0.274433985 container create 07def10ddc8593e441dff156cca2d973bf2935b81f75ae79127009744ca70ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hamilton, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 05:06:20 np0005549474 systemd[1]: Started libpod-conmon-07def10ddc8593e441dff156cca2d973bf2935b81f75ae79127009744ca70ce2.scope.
Dec  7 05:06:20 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:06:20 np0005549474 podman[260785]: 2025-12-07 10:06:20.151797067 +0000 UTC m=+0.391110685 container init 07def10ddc8593e441dff156cca2d973bf2935b81f75ae79127009744ca70ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hamilton, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 05:06:20 np0005549474 podman[260785]: 2025-12-07 10:06:20.164427542 +0000 UTC m=+0.403741150 container start 07def10ddc8593e441dff156cca2d973bf2935b81f75ae79127009744ca70ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hamilton, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:06:20 np0005549474 podman[260785]: 2025-12-07 10:06:20.168675179 +0000 UTC m=+0.407988787 container attach 07def10ddc8593e441dff156cca2d973bf2935b81f75ae79127009744ca70ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:06:20 np0005549474 mystifying_hamilton[260801]: 167 167
Dec  7 05:06:20 np0005549474 systemd[1]: libpod-07def10ddc8593e441dff156cca2d973bf2935b81f75ae79127009744ca70ce2.scope: Deactivated successfully.
Dec  7 05:06:20 np0005549474 podman[260785]: 2025-12-07 10:06:20.174615591 +0000 UTC m=+0.413929199 container died 07def10ddc8593e441dff156cca2d973bf2935b81f75ae79127009744ca70ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hamilton, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 05:06:20 np0005549474 systemd[1]: var-lib-containers-storage-overlay-19b04067eafcd1a555483aab6ae36a90b693f2539380e35f9cd53d947a7e2390-merged.mount: Deactivated successfully.
Dec  7 05:06:20 np0005549474 podman[260785]: 2025-12-07 10:06:20.217889165 +0000 UTC m=+0.457202743 container remove 07def10ddc8593e441dff156cca2d973bf2935b81f75ae79127009744ca70ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Dec  7 05:06:20 np0005549474 systemd[1]: libpod-conmon-07def10ddc8593e441dff156cca2d973bf2935b81f75ae79127009744ca70ce2.scope: Deactivated successfully.
Dec  7 05:06:20 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:06:20 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:06:20 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:06:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:20 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:20.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:20 np0005549474 podman[260823]: 2025-12-07 10:06:20.419667832 +0000 UTC m=+0.047138570 container create 28d3ac5c61ac674a5ee0ad646e408166458d8b2eafea6c64955c70455304160e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_lalande, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  7 05:06:20 np0005549474 systemd[1]: Started libpod-conmon-28d3ac5c61ac674a5ee0ad646e408166458d8b2eafea6c64955c70455304160e.scope.
Dec  7 05:06:20 np0005549474 podman[260823]: 2025-12-07 10:06:20.401438933 +0000 UTC m=+0.028909701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:06:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:20 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:20 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:06:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559acaa6cc8681f54395ef7b6f9c518d02af56148f4196de401d92c2c9839cda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559acaa6cc8681f54395ef7b6f9c518d02af56148f4196de401d92c2c9839cda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559acaa6cc8681f54395ef7b6f9c518d02af56148f4196de401d92c2c9839cda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559acaa6cc8681f54395ef7b6f9c518d02af56148f4196de401d92c2c9839cda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:20 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559acaa6cc8681f54395ef7b6f9c518d02af56148f4196de401d92c2c9839cda/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:20 np0005549474 podman[260823]: 2025-12-07 10:06:20.571597496 +0000 UTC m=+0.199068234 container init 28d3ac5c61ac674a5ee0ad646e408166458d8b2eafea6c64955c70455304160e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 05:06:20 np0005549474 podman[260823]: 2025-12-07 10:06:20.579861842 +0000 UTC m=+0.207332570 container start 28d3ac5c61ac674a5ee0ad646e408166458d8b2eafea6c64955c70455304160e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_lalande, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:06:20 np0005549474 podman[260823]: 2025-12-07 10:06:20.582486783 +0000 UTC m=+0.209957521 container attach 28d3ac5c61ac674a5ee0ad646e408166458d8b2eafea6c64955c70455304160e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_lalande, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:06:20 np0005549474 youthful_lalande[260840]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:06:20 np0005549474 youthful_lalande[260840]: --> All data devices are unavailable
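
    The two lines above are ceph-volume output relayed through cephadm's device scan: the host
    presents no raw physical data devices and one LVM device, and that LV is reported
    unavailable -- consistent with the listing a moment later (05:06:22), which shows
    /dev/ceph_vg0/ceph_lv0 already tagged as the block device of osd.0. A minimal sketch of the
    same availability test, assuming records shaped like `ceph-volume inventory --format json`
    output (the input file name is hypothetical):

        import json

        # Hypothetical capture of `ceph-volume inventory --format json` on this host.
        with open("inventory.json") as f:
            devices = json.load(f)

        for dev in devices:
            if dev["available"]:
                print(f"usable: {dev['path']}")
            else:
                # An LV already carrying an OSD lands here, as in the log above.
                print(f"unavailable: {dev['path']} ({', '.join(dev['rejected_reasons'])})")
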
Dec  7 05:06:20 np0005549474 systemd[1]: libpod-28d3ac5c61ac674a5ee0ad646e408166458d8b2eafea6c64955c70455304160e.scope: Deactivated successfully.
Dec  7 05:06:20 np0005549474 podman[260823]: 2025-12-07 10:06:20.901028732 +0000 UTC m=+0.528499470 container died 28d3ac5c61ac674a5ee0ad646e408166458d8b2eafea6c64955c70455304160e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_lalande, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 05:06:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v716: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:21.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:21 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:21 np0005549474 systemd[1]: var-lib-containers-storage-overlay-559acaa6cc8681f54395ef7b6f9c518d02af56148f4196de401d92c2c9839cda-merged.mount: Deactivated successfully.
Dec  7 05:06:21 np0005549474 podman[260823]: 2025-12-07 10:06:21.50659325 +0000 UTC m=+1.134063988 container remove 28d3ac5c61ac674a5ee0ad646e408166458d8b2eafea6c64955c70455304160e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 05:06:21 np0005549474 systemd[1]: libpod-conmon-28d3ac5c61ac674a5ee0ad646e408166458d8b2eafea6c64955c70455304160e.scope: Deactivated successfully.
Dec  7 05:06:22 np0005549474 podman[260987]: 2025-12-07 10:06:22.18430611 +0000 UTC m=+0.075406402 container create 56315aa7c0ff4a76e819163e3a9cff2548a8d39787caeca3364dae9f3e3462cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bassi, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:06:22 np0005549474 systemd[1]: Started libpod-conmon-56315aa7c0ff4a76e819163e3a9cff2548a8d39787caeca3364dae9f3e3462cf.scope.
Dec  7 05:06:22 np0005549474 podman[260987]: 2025-12-07 10:06:22.153835047 +0000 UTC m=+0.044935379 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:06:22 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:06:22 np0005549474 podman[260987]: 2025-12-07 10:06:22.301166005 +0000 UTC m=+0.192266287 container init 56315aa7c0ff4a76e819163e3a9cff2548a8d39787caeca3364dae9f3e3462cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bassi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 05:06:22 np0005549474 podman[260987]: 2025-12-07 10:06:22.308573658 +0000 UTC m=+0.199673940 container start 56315aa7c0ff4a76e819163e3a9cff2548a8d39787caeca3364dae9f3e3462cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bassi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 05:06:22 np0005549474 podman[260987]: 2025-12-07 10:06:22.312767202 +0000 UTC m=+0.203867474 container attach 56315aa7c0ff4a76e819163e3a9cff2548a8d39787caeca3364dae9f3e3462cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec  7 05:06:22 np0005549474 epic_bassi[261003]: 167 167
Dec  7 05:06:22 np0005549474 systemd[1]: libpod-56315aa7c0ff4a76e819163e3a9cff2548a8d39787caeca3364dae9f3e3462cf.scope: Deactivated successfully.
Dec  7 05:06:22 np0005549474 podman[260987]: 2025-12-07 10:06:22.314362716 +0000 UTC m=+0.205463008 container died 56315aa7c0ff4a76e819163e3a9cff2548a8d39787caeca3364dae9f3e3462cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bassi, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:06:22 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5c61402e3154d064a85d71bbf508d81887f88e4cfdf5004414e6c6e810d1b3b4-merged.mount: Deactivated successfully.
Dec  7 05:06:22 np0005549474 podman[260987]: 2025-12-07 10:06:22.362088481 +0000 UTC m=+0.253188763 container remove 56315aa7c0ff4a76e819163e3a9cff2548a8d39787caeca3364dae9f3e3462cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bassi, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:06:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:22 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001ff0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:22 np0005549474 systemd[1]: libpod-conmon-56315aa7c0ff4a76e819163e3a9cff2548a8d39787caeca3364dae9f3e3462cf.scope: Deactivated successfully.
Dec  7 05:06:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:22.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:22 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:22 np0005549474 podman[261027]: 2025-12-07 10:06:22.591395601 +0000 UTC m=+0.040927351 container create 350fd291f7ec9df001b842994e9558e8d238889c0486e8de53ed258fd023c489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 05:06:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:22 np0005549474 systemd[1]: Started libpod-conmon-350fd291f7ec9df001b842994e9558e8d238889c0486e8de53ed258fd023c489.scope.
Dec  7 05:06:22 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:06:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4065740ca6e951ad36373e8386994846a409bf67ae7262b27f769b2d3f4be4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4065740ca6e951ad36373e8386994846a409bf67ae7262b27f769b2d3f4be4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4065740ca6e951ad36373e8386994846a409bf67ae7262b27f769b2d3f4be4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4065740ca6e951ad36373e8386994846a409bf67ae7262b27f769b2d3f4be4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:22 np0005549474 podman[261027]: 2025-12-07 10:06:22.573320506 +0000 UTC m=+0.022852236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:06:22 np0005549474 podman[261027]: 2025-12-07 10:06:22.677446193 +0000 UTC m=+0.126977963 container init 350fd291f7ec9df001b842994e9558e8d238889c0486e8de53ed258fd023c489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 05:06:22 np0005549474 podman[261027]: 2025-12-07 10:06:22.684111916 +0000 UTC m=+0.133643646 container start 350fd291f7ec9df001b842994e9558e8d238889c0486e8de53ed258fd023c489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mcnulty, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 05:06:22 np0005549474 podman[261027]: 2025-12-07 10:06:22.687485138 +0000 UTC m=+0.137016938 container attach 350fd291f7ec9df001b842994e9558e8d238889c0486e8de53ed258fd023c489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]: {
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:    "0": [
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:        {
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            "devices": [
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "/dev/loop3"
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            ],
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            "lv_name": "ceph_lv0",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            "lv_size": "21470642176",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            "name": "ceph_lv0",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            "tags": {
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.cluster_name": "ceph",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.crush_device_class": "",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.encrypted": "0",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.osd_id": "0",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.type": "block",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.vdo": "0",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:                "ceph.with_tpm": "0"
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            },
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            "type": "block",
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:            "vg_name": "ceph_vg0"
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:        }
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]:    ]
Dec  7 05:06:22 np0005549474 beautiful_mcnulty[261043]: }
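
    The JSON block printed by beautiful_mcnulty matches the shape of `ceph-volume lvm list
    --format json` output: a map keyed by OSD id, each value a list of LV records whose `tags`
    identify the OSD. A minimal sketch for pulling the useful fields out of such a dump (the
    file name `lvm_list.json` is an assumption):

        import json

        # Assumed input: a dump shaped like the block above, captured to a file.
        with open("lvm_list.json") as f:
            osds = json.load(f)

        for osd_id, lvs in osds.items():
            for lv in lvs:
                tags = lv["tags"]
                # Field names taken directly from the log above.
                print(f"osd.{osd_id}: path={lv['lv_path']} "
                      f"osd_fsid={tags['ceph.osd_fsid']} "
                      f"cluster={tags['ceph.cluster_fsid']} "
                      f"encrypted={tags['ceph.encrypted']}")
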
Dec  7 05:06:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v717: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:22 np0005549474 systemd[1]: libpod-350fd291f7ec9df001b842994e9558e8d238889c0486e8de53ed258fd023c489.scope: Deactivated successfully.
Dec  7 05:06:22 np0005549474 podman[261027]: 2025-12-07 10:06:22.979308307 +0000 UTC m=+0.428840037 container died 350fd291f7ec9df001b842994e9558e8d238889c0486e8de53ed258fd023c489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mcnulty, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:06:23 np0005549474 systemd[1]: var-lib-containers-storage-overlay-7c4065740ca6e951ad36373e8386994846a409bf67ae7262b27f769b2d3f4be4-merged.mount: Deactivated successfully.
Dec  7 05:06:23 np0005549474 podman[261027]: 2025-12-07 10:06:23.01747226 +0000 UTC m=+0.467003980 container remove 350fd291f7ec9df001b842994e9558e8d238889c0486e8de53ed258fd023c489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:06:23 np0005549474 systemd[1]: libpod-conmon-350fd291f7ec9df001b842994e9558e8d238889c0486e8de53ed258fd023c489.scope: Deactivated successfully.
Dec  7 05:06:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:23.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:23 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:23 np0005549474 podman[261158]: 2025-12-07 10:06:23.687409988 +0000 UTC m=+0.053746841 container create cbcb3968a3d8b199d068ef617ec7cbf10ae8941839717fb9f79cef3f4278484c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:06:23 np0005549474 systemd[1]: Started libpod-conmon-cbcb3968a3d8b199d068ef617ec7cbf10ae8941839717fb9f79cef3f4278484c.scope.
Dec  7 05:06:23 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:06:23 np0005549474 podman[261158]: 2025-12-07 10:06:23.668584783 +0000 UTC m=+0.034921666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:06:23 np0005549474 podman[261158]: 2025-12-07 10:06:23.778945731 +0000 UTC m=+0.145282614 container init cbcb3968a3d8b199d068ef617ec7cbf10ae8941839717fb9f79cef3f4278484c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 05:06:23 np0005549474 podman[261158]: 2025-12-07 10:06:23.789634742 +0000 UTC m=+0.155971595 container start cbcb3968a3d8b199d068ef617ec7cbf10ae8941839717fb9f79cef3f4278484c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:06:23 np0005549474 podman[261158]: 2025-12-07 10:06:23.794096455 +0000 UTC m=+0.160433338 container attach cbcb3968a3d8b199d068ef617ec7cbf10ae8941839717fb9f79cef3f4278484c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:06:23 np0005549474 compassionate_kare[261175]: 167 167
Dec  7 05:06:23 np0005549474 systemd[1]: libpod-cbcb3968a3d8b199d068ef617ec7cbf10ae8941839717fb9f79cef3f4278484c.scope: Deactivated successfully.
Dec  7 05:06:23 np0005549474 podman[261158]: 2025-12-07 10:06:23.795894014 +0000 UTC m=+0.162230897 container died cbcb3968a3d8b199d068ef617ec7cbf10ae8941839717fb9f79cef3f4278484c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 05:06:23 np0005549474 systemd[1]: var-lib-containers-storage-overlay-8b3728a15db17774eeaed10e3c26a843d1ee8541d19db02406899aa4f564c94c-merged.mount: Deactivated successfully.
Dec  7 05:06:23 np0005549474 podman[261158]: 2025-12-07 10:06:23.852862751 +0000 UTC m=+0.219199634 container remove cbcb3968a3d8b199d068ef617ec7cbf10ae8941839717fb9f79cef3f4278484c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 05:06:23 np0005549474 systemd[1]: libpod-conmon-cbcb3968a3d8b199d068ef617ec7cbf10ae8941839717fb9f79cef3f4278484c.scope: Deactivated successfully.
Dec  7 05:06:24 np0005549474 podman[261200]: 2025-12-07 10:06:24.060329634 +0000 UTC m=+0.053463323 container create 13db20f6a2d9e60a52a2771bac129acd955538cb4dfecc05ab1205db6204a149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:06:24 np0005549474 systemd[1]: Started libpod-conmon-13db20f6a2d9e60a52a2771bac129acd955538cb4dfecc05ab1205db6204a149.scope.
Dec  7 05:06:24 np0005549474 podman[261200]: 2025-12-07 10:06:24.03825713 +0000 UTC m=+0.031390829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:06:24 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:06:24 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927b9949f4f0a63e4780f5408e75f5ee311f055afe98e37ff4c56340fb45115b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:24 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927b9949f4f0a63e4780f5408e75f5ee311f055afe98e37ff4c56340fb45115b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:24 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927b9949f4f0a63e4780f5408e75f5ee311f055afe98e37ff4c56340fb45115b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:24 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927b9949f4f0a63e4780f5408e75f5ee311f055afe98e37ff4c56340fb45115b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:24 np0005549474 podman[261200]: 2025-12-07 10:06:24.16660549 +0000 UTC m=+0.159739189 container init 13db20f6a2d9e60a52a2771bac129acd955538cb4dfecc05ab1205db6204a149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 05:06:24 np0005549474 podman[261200]: 2025-12-07 10:06:24.174742042 +0000 UTC m=+0.167875701 container start 13db20f6a2d9e60a52a2771bac129acd955538cb4dfecc05ab1205db6204a149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 05:06:24 np0005549474 podman[261200]: 2025-12-07 10:06:24.17794105 +0000 UTC m=+0.171074709 container attach 13db20f6a2d9e60a52a2771bac129acd955538cb4dfecc05ab1205db6204a149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:06:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:24 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:24.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:24 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001ff0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:24 np0005549474 lvm[261296]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:06:24 np0005549474 lvm[261296]: VG ceph_vg0 finished
Dec  7 05:06:24 np0005549474 podman[261289]: 2025-12-07 10:06:24.847877817 +0000 UTC m=+0.059108797 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
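
    Alongside the ceph-volume probes, podman's periodic healthcheck for ovn_metadata_agent
    reports health_status=healthy with a zero failing streak; the embedded config_data shows
    the check simply runs /openstack/healthcheck inside the container. A hedged sketch of
    polling that same status from the host (assuming a podman release where `podman inspect`
    exposes the health state under `.State.Health`):

        import json
        import subprocess

        # Host-side poll of the status podman records in the line above.
        out = subprocess.run(
            ["podman", "inspect", "ovn_metadata_agent"],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        # Field name assumed; some podman releases use "Healthcheck" instead.
        health = state.get("Health") or state.get("Healthcheck") or {}
        print(health.get("Status", "unknown"), health.get("FailingStreak"))
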
Dec  7 05:06:24 np0005549474 beautiful_hofstadter[261216]: {}
Dec  7 05:06:24 np0005549474 systemd[1]: libpod-13db20f6a2d9e60a52a2771bac129acd955538cb4dfecc05ab1205db6204a149.scope: Deactivated successfully.
Dec  7 05:06:24 np0005549474 systemd[1]: libpod-13db20f6a2d9e60a52a2771bac129acd955538cb4dfecc05ab1205db6204a149.scope: Consumed 1.132s CPU time.
Dec  7 05:06:24 np0005549474 podman[261312]: 2025-12-07 10:06:24.914121588 +0000 UTC m=+0.021956631 container died 13db20f6a2d9e60a52a2771bac129acd955538cb4dfecc05ab1205db6204a149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 05:06:24 np0005549474 systemd[1]: var-lib-containers-storage-overlay-927b9949f4f0a63e4780f5408e75f5ee311f055afe98e37ff4c56340fb45115b-merged.mount: Deactivated successfully.
Dec  7 05:06:24 np0005549474 podman[261312]: 2025-12-07 10:06:24.947483581 +0000 UTC m=+0.055318604 container remove 13db20f6a2d9e60a52a2771bac129acd955538cb4dfecc05ab1205db6204a149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:06:24 np0005549474 systemd[1]: libpod-conmon-13db20f6a2d9e60a52a2771bac129acd955538cb4dfecc05ab1205db6204a149.scope: Deactivated successfully.
Dec  7 05:06:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v718: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 05:06:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:06:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:06:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:06:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:06:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:25.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:25 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:26 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:06:26 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:06:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:26 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:26.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:26 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v719: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:27.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:06:27.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:06:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:27 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001ff0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:06:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:06:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=404 latency=0.002000054s ======
Dec  7 05:06:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:27.905 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.002000054s
Dec  7 05:06:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.003000081s ======
Dec  7 05:06:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - - [07/Dec/2025:10:06:27.930 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.003000081s
Dec  7 05:06:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:28 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:28.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:28 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23cc004970 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v720: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 05:06:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:29.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  7 05:06:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:29 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23c0004830 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:29] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  7 05:06:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:29] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  7 05:06:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:30 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b8001ff0 fd 16 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:06:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:06:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:30.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:06:30 np0005549474 kernel: ganesha.nfsd[259501]: segfault at 50 ip 00007f2482ff032e sp 00007f24437fd210 error 4 in libntirpc.so.5.8[7f2482fd5000+2c000] likely on CPU 7 (core 0, socket 7)
Dec  7 05:06:30 np0005549474 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
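The segfault line gives the faulting instruction pointer together with the load range of libntirpc.so.5.8, so the crash offset inside the library can be recovered without the core file. A quick check with the values copied from the kernel line above; the resulting offset is what addr2line or gdb would need against matching debuginfo:

    # Values copied from the kernel segfault line above.
    ip   = 0x7f2482ff032e    # faulting instruction pointer
    base = 0x7f2482fd5000    # load address of libntirpc.so.5.8
    size = 0x2c000           # mapped size reported in brackets

    offset = ip - base
    assert 0 <= offset < size
    print(hex(offset))       # 0x1b32e

Note that systemd-coredump below reports thread 65 of the dumped process at offset +0x2232e in the same library, a different address than the kernel's faulting IP.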
Dec  7 05:06:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[257180]: 07/12/2025 10:06:30 : epoch 693550a4 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23b4004880 fd 16 proxy ignored for local
Dec  7 05:06:30 np0005549474 systemd[1]: Started Process Core Dump (PID 261357/UID 0).
Dec  7 05:06:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v721: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:31.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:31 np0005549474 systemd-coredump[261358]: Process 257184 (ganesha.nfsd) of user 0 dumped core.
    Stack trace of thread 65:
    #0  0x00007f2482ff032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
    ELF object binary architecture: AMD x86-64
Dec  7 05:06:31 np0005549474 systemd[1]: systemd-coredump@8-261357-0.service: Deactivated successfully.
Dec  7 05:06:31 np0005549474 systemd[1]: systemd-coredump@8-261357-0.service: Consumed 1.195s CPU time.
Dec  7 05:06:31 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 05:06:31 np0005549474 podman[261365]: 2025-12-07 10:06:31.957993201 +0000 UTC m=+0.045808604 container died 1920ba545e62866db05b6f2df13b52cabfc62c05099718446d8fe8941487c369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec  7 05:06:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay-45167ca3e23a7ecac8f78cb0e84ada578221d767c3e2464240d335b390e51654-merged.mount: Deactivated successfully.
Dec  7 05:06:32 np0005549474 podman[261365]: 2025-12-07 10:06:32.005738956 +0000 UTC m=+0.093554309 container remove 1920ba545e62866db05b6f2df13b52cabfc62c05099718446d8fe8941487c369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:06:32 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Main process exited, code=exited, status=139/n/a
Dec  7 05:06:32 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Failed with result 'exit-code'.
Dec  7 05:06:32 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.922s CPU time.
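The status=139 above follows the shell-style convention of 128 + n for death by signal n, so this is signal 11: the unit failed because of the ganesha.nfsd SIGSEGV logged earlier, propagated through the container's exit status, not an ordinary error return. A one-liner to decode such codes:

    import signal

    status = 139                               # from the "Main process exited" line above
    assert status > 128                        # >128 means the process died on a signal
    print(signal.Signals(status - 128).name)   # SIGSEGV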
Dec  7 05:06:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:32.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v722: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:06:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:33.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Dec  7 05:06:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Dec  7 05:06:33 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Dec  7 05:06:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Dec  7 05:06:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Dec  7 05:06:34 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Dec  7 05:06:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:34.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v725: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Dec  7 05:06:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:35.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Dec  7 05:06:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Dec  7 05:06:35 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Dec  7 05:06:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:36.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v727: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.8 KiB/s wr, 16 op/s
Dec  7 05:06:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:37.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:06:37.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:06:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:06:37.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:06:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Dec  7 05:06:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Dec  7 05:06:37 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Dec  7 05:06:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100637 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
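haproxy has marked nfs.cephfs.2 down at layer 4, consistent with the ganesha crash and restart above; with 2 active servers left the NFS frontend is still serviced. A throwaway parser for these state-change WARNINGs, assuming the exact phrasing used in this log:

    import re

    # Phrasing assumed from the haproxy WARNING lines in this log.
    HAPROXY_DOWN = re.compile(
        r"Server (?P<backend>\S+)/(?P<server>\S+) is (?P<state>UP|DOWN).*?"
        r"(?P<active>\d+) active and (?P<backup>\d+) backup servers? left"
    )

    line = ('[WARNING] 340/100637 (4) : Server backend/nfs.cephfs.2 is DOWN, '
            'reason: Layer4 connection problem, info: "Connection refused", '
            'check duration: 0ms. 2 active and 0 backup servers left.')
    m = HAPROXY_DOWN.search(line)
    print(m["backend"], m["server"], m["state"], "active left:", m["active"])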
Dec  7 05:06:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:38.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:06:38.617 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:06:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:06:38.618 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:06:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:06:38.618 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:06:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v729: 337 pgs: 337 active+clean; 24 MiB data, 182 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.1 MiB/s wr, 42 op/s
Dec  7 05:06:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:39] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  7 05:06:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:39] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec  7 05:06:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:40.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v730: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.0 MiB/s wr, 55 op/s
Dec  7 05:06:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:41.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:06:42
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.meta', '.nfs', 'vms', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log']
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:06:42 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Scheduled restart job, restart counter is at 9.
Dec  7 05:06:42 np0005549474 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 05:06:42 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.922s CPU time.
Dec  7 05:06:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:06:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:06:42 np0005549474 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 05:06:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:42.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:06:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Dec  7 05:06:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Dec  7 05:06:42 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Dec  7 05:06:42 np0005549474 podman[261490]: 2025-12-07 10:06:42.641822467 +0000 UTC m=+0.046285416 container create 501f3ba0072969d1653569a243969f9e556a26b8997c0aef685da2bfbfdac70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:06:42 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a8a7d6909f7cde947393cf67ae8ac732dfb99278fcf4edcfcfcebb1fc1e969/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:42 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a8a7d6909f7cde947393cf67ae8ac732dfb99278fcf4edcfcfcebb1fc1e969/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:42 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a8a7d6909f7cde947393cf67ae8ac732dfb99278fcf4edcfcfcebb1fc1e969/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:42 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a8a7d6909f7cde947393cf67ae8ac732dfb99278fcf4edcfcfcebb1fc1e969/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bjrqrk-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:06:42 np0005549474 podman[261490]: 2025-12-07 10:06:42.705777496 +0000 UTC m=+0.110240435 container init 501f3ba0072969d1653569a243969f9e556a26b8997c0aef685da2bfbfdac70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:06:42 np0005549474 podman[261490]: 2025-12-07 10:06:42.712110979 +0000 UTC m=+0.116573908 container start 501f3ba0072969d1653569a243969f9e556a26b8997c0aef685da2bfbfdac70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 05:06:42 np0005549474 podman[261490]: 2025-12-07 10:06:42.617885083 +0000 UTC m=+0.022348032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
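The autoscaler numbers above are internally consistent: each pool's pg target is its usage ratio times its bias times the cluster's PG budget. For the 3 OSDs in this osdmap at Ceph's default mon_target_pg_per_osd of 100 the budget is 300 PGs; the 100-per-OSD default is an assumption, inferred because it reproduces the logged targets exactly. Checking two of the lines above with values copied verbatim:

    osds, target_pg_per_osd = 3, 100    # 3 OSDs per the osdmap; 100 is Ceph's default
    budget = osds * target_pg_per_osd   # = 300 PGs to distribute

    # Pool 'images': usage 0.000665858301588852, bias 1.0
    print(0.000665858301588852 * 1.0 * budget)   # ~0.199757490..., as logged

    # Pool 'cephfs.cephfs.meta': usage 5.087256625643029e-07, bias 4.0
    print(5.087256625643029e-07 * 4.0 * budget)  # ~0.000610470795..., as logged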
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:06:42 np0005549474 bash[261490]: 501f3ba0072969d1653569a243969f9e556a26b8997c0aef685da2bfbfdac70d
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:06:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:42 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 05:06:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:42 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 05:06:42 np0005549474 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:06:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:42 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 05:06:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:42 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 05:06:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:42 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 05:06:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:42 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 05:06:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:42 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 05:06:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:42 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:06:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v732: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.2 MiB/s wr, 36 op/s
Dec  7 05:06:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:43.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:44.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v733: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.1 MiB/s wr, 36 op/s
Dec  7 05:06:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:45.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:46.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v734: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.2 MiB/s wr, 29 op/s
Dec  7 05:06:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:47.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:06:47.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:06:47 np0005549474 podman[261554]: 2025-12-07 10:06:47.304222155 +0000 UTC m=+0.110416990 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec  7 05:06:47 np0005549474 podman[261555]: 2025-12-07 10:06:47.341234428 +0000 UTC m=+0.146979311 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller)
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.625610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007625647, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 848, "num_deletes": 252, "total_data_size": 1299098, "memory_usage": 1322808, "flush_reason": "Manual Compaction"}
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007636491, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 864013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22231, "largest_seqno": 23078, "table_properties": {"data_size": 860338, "index_size": 1391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9433, "raw_average_key_size": 20, "raw_value_size": 852531, "raw_average_value_size": 1837, "num_data_blocks": 61, "num_entries": 464, "num_filter_entries": 464, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101945, "oldest_key_time": 1765101945, "file_creation_time": 1765102007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 10939 microseconds, and 5295 cpu microseconds.
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.636547) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 864013 bytes OK
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.636569) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.638358) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.638381) EVENT_LOG_v1 {"time_micros": 1765102007638374, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.638403) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1294962, prev total WAL file size 1294962, number of live WAL files 2.
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.639226) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(843KB)], [47(14MB)]
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007639298, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16305150, "oldest_snapshot_seqno": -1}
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5526 keys, 12613908 bytes, temperature: kUnknown
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007778428, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12613908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12578059, "index_size": 20955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 139906, "raw_average_key_size": 25, "raw_value_size": 12479197, "raw_average_value_size": 2258, "num_data_blocks": 856, "num_entries": 5526, "num_filter_entries": 5526, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765102007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.778644) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12613908 bytes
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.780275) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.1 rd, 90.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 14.7 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(33.5) write-amplify(14.6) OK, records in: 6021, records dropped: 495 output_compression: NoCompression
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.780290) EVENT_LOG_v1 {"time_micros": 1765102007780283, "job": 24, "event": "compaction_finished", "compaction_time_micros": 139188, "compaction_time_cpu_micros": 28074, "output_level": 6, "num_output_files": 1, "total_output_size": 12613908, "num_input_records": 6021, "num_output_records": 5526, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007780535, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007783375, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.639107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.783454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.783459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.783461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.783463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:06:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:06:47.783465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
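The rocksdb EVENT_LOG_v1 records above embed JSON in the message text, so flush and compaction rates can be recomputed as a cross-check against the summary the compaction job itself prints (117.1 MB/s read, 90.6 MB/s write for job 24). A sketch that pulls the JSON out and, for flush job 23 above, derives the write rate from the logged 864013 bytes over 10939 microseconds:

    import json, re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def parse_event(line):
        """Return the embedded JSON dict from a rocksdb EVENT_LOG_v1 line, or None."""
        m = EVENT.search(line)
        return json.loads(m.group(1)) if m else None

    # Cross-check flush job 23: table_file_creation reported file_size=864013 bytes,
    # and the "Flush lasted" line reported 10939 microseconds.
    file_size, flush_us = 864013, 10939
    print(f"flush write rate ~ {file_size / (flush_us / 1e6) / 1e6:.1f} MB/s")  # ~79.0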
Dec  7 05:06:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100648 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:06:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:48.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:48 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:06:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:48 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:06:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:48 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:06:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v735: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 8.8 KiB/s rd, 1.7 MiB/s wr, 14 op/s
Dec  7 05:06:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:49.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:49] "GET /metrics HTTP/1.1" 200 48325 "" "Prometheus/2.51.0"
Dec  7 05:06:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:49] "GET /metrics HTTP/1.1" 200 48325 "" "Prometheus/2.51.0"
Dec  7 05:06:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:50.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v736: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Dec  7 05:06:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:51.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:52.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v737: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 494 B/s wr, 2 op/s
Dec  7 05:06:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:52 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:06:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:53 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:06:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:53 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:06:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:53 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:06:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:53.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:53 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:06:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:53 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:06:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:53 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:06:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:54.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:06:54.471 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  7 05:06:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:06:54.472 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  7 05:06:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v738: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 767 B/s wr, 3 op/s
Dec  7 05:06:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:55.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:55 np0005549474 podman[261613]: 2025-12-07 10:06:55.249433981 +0000 UTC m=+0.063691243 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 05:06:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:56.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v739: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Dec  7 05:06:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:57.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:06:57.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:06:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:06:57.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:06:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:06:57.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:06:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:06:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:06:57 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:06:57.475 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  7 05:06:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:06:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:06:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:58.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:06:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v740: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Dec  7 05:06:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:06:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:06:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:59.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:06:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:59 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:06:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:59 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  7 05:06:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:59 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  7 05:06:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:59 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  7 05:06:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:59 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  7 05:06:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:59 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  7 05:06:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:59 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  7 05:06:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:59 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 05:06:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:59 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 05:06:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:06:59 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 05:06:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:59] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec  7 05:06:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:06:59] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:00.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:00 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v741: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 852 B/s wr, 3 op/s
Dec  7 05:07:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:01.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:01 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:02 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:02.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:02 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v742: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Dec  7 05:07:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:03 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:07:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:03 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:07:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100703 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:07:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:03 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:04 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:04.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:04 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v743: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec  7 05:07:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:05.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:05 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:06 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:07:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:06 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:06.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:06 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v744: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:07:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:07.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:07:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:07.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:07 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100708 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:07:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:08 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:08.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:08 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v745: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:07:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:09.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:09 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:09 np0005549474 nova_compute[256753]: 2025-12-07 10:07:09.595 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "85f56bb8-2b0e-4405-a313-156300c853e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:07:09 np0005549474 nova_compute[256753]: 2025-12-07 10:07:09.596 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:07:09 np0005549474 nova_compute[256753]: 2025-12-07 10:07:09.625 256757 DEBUG nova.compute.manager [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  7 05:07:09 np0005549474 nova_compute[256753]: 2025-12-07 10:07:09.754 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:07:09 np0005549474 nova_compute[256753]: 2025-12-07 10:07:09.755 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:07:09 np0005549474 nova_compute[256753]: 2025-12-07 10:07:09.765 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  7 05:07:09 np0005549474 nova_compute[256753]: 2025-12-07 10:07:09.765 256757 INFO nova.compute.claims [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Claim successful on node compute-0.ctlplane.example.com
Dec  7 05:07:09 np0005549474 nova_compute[256753]: 2025-12-07 10:07:09.911 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:07:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:09] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec  7 05:07:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:09] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec  7 05:07:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:07:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/910571761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.403 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.410 256757 DEBUG nova.compute.provider_tree [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:07:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:10 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.432 256757 DEBUG nova.scheduler.client.report [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.456 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.457 256757 DEBUG nova.compute.manager [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  7 05:07:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:10.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.517 256757 DEBUG nova.compute.manager [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.517 256757 DEBUG nova.network.neutron [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.547 256757 INFO nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.575 256757 DEBUG nova.compute.manager [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  7 05:07:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:10 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.708 256757 DEBUG nova.compute.manager [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.710 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.710 256757 INFO nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Creating image(s)
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.746 256757 DEBUG nova.storage.rbd_utils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 85f56bb8-2b0e-4405-a313-156300c853e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.782 256757 DEBUG nova.storage.rbd_utils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 85f56bb8-2b0e-4405-a313-156300c853e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.818 256757 DEBUG nova.storage.rbd_utils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 85f56bb8-2b0e-4405-a313-156300c853e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.822 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.824 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.827 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.827 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.850 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.852 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.852 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  7 05:07:10 np0005549474 nova_compute[256753]: 2025-12-07 10:07:10.866 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:07:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v746: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:07:11 np0005549474 nova_compute[256753]: 2025-12-07 10:07:11.105 256757 DEBUG nova.virt.libvirt.imagebackend [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image locations are: [{'url': 'rbd://75f4c9fd-539a-5e17-b55a-0a12a4e2736c/images/af7b5730-2fa9-449f-8ccb-a9519582f1b2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://75f4c9fd-539a-5e17-b55a-0a12a4e2736c/images/af7b5730-2fa9-449f-8ccb-a9519582f1b2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec  7 05:07:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:11.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:11 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:11 np0005549474 nova_compute[256753]: 2025-12-07 10:07:11.247 256757 WARNING oslo_policy.policy [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec  7 05:07:11 np0005549474 nova_compute[256753]: 2025-12-07 10:07:11.247 256757 WARNING oslo_policy.policy [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec  7 05:07:11 np0005549474 nova_compute[256753]: 2025-12-07 10:07:11.251 256757 DEBUG nova.policy [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f27cf20bf8c4429aa12589418a57e41', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ad61a97ffab4252be3eafb028b560c1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  7 05:07:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:07:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:07:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:12 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:12 np0005549474 nova_compute[256753]: 2025-12-07 10:07:12.418 256757 DEBUG nova.network.neutron [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Successfully created port: 231300d5-bcb5-4f0e-be76-d6422cfeb132 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  7 05:07:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:07:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:07:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:12.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:07:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:07:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:07:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:07:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:12 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:12 np0005549474 nova_compute[256753]: 2025-12-07 10:07:12.805 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:07:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v747: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.023 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.116 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b.part --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.118 256757 DEBUG nova.virt.images [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] af7b5730-2fa9-449f-8ccb-a9519582f1b2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.120 256757 DEBUG nova.privsep.utils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec  7 05:07:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:13.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.122 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b.part /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:07:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:13 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.356 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b.part /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b.converted" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.365 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.458 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b.converted --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.461 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.496 256757 DEBUG nova.storage.rbd_utils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 85f56bb8-2b0e-4405-a313-156300c853e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.502 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b 85f56bb8-2b0e-4405-a313-156300c853e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.524 256757 DEBUG nova.network.neutron [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Successfully updated port: 231300d5-bcb5-4f0e-be76-d6422cfeb132 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.554 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.555 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquired lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.556 256757 DEBUG nova.network.neutron [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.763 256757 DEBUG nova.network.neutron [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.783 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.784 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.785 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
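[annotation] The acquire/release pair above is the standard oslo.concurrency pattern. A minimal sketch of what produces those DEBUG lines, assuming the decorated function stands in for nova's ResourceTracker.clean_compute_node_cache (the lock name "compute_resources" is taken from the log):

```python
# Sketch only: the named-semaphore pattern behind the
# "Acquiring lock" / "acquired" / "released" messages above.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def clean_compute_node_cache():
    # Runs only while the "compute_resources" semaphore is held;
    # lockutils logs entry/exit at DEBUG, including hold time.
    pass

clean_compute_node_cache()
```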
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.785 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  7 05:07:13 np0005549474 nova_compute[256753]: 2025-12-07 10:07:13.786 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.238 256757 DEBUG nova.compute.manager [req-c023a766-fa5a-4786-a45d-2186767bb5d3 req-003a10c6-1c36-4af3-a6c3-910a1333719f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Received event network-changed-231300d5-bcb5-4f0e-be76-d6422cfeb132 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.239 256757 DEBUG nova.compute.manager [req-c023a766-fa5a-4786-a45d-2186767bb5d3 req-003a10c6-1c36-4af3-a6c3-910a1333719f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Refreshing instance network info cache due to event network-changed-231300d5-bcb5-4f0e-be76-d6422cfeb132. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.239 256757 DEBUG oslo_concurrency.lockutils [req-c023a766-fa5a-4786-a45d-2186767bb5d3 req-003a10c6-1c36-4af3-a6c3-910a1333719f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:07:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:07:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1592531399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.275 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
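[annotation] The "ceph df" probe above is how the RBD image backend samples cluster capacity during the resource audit. A minimal sketch (not nova's actual code path) that runs the same command from the log and reads the cluster totals; the JSON field names follow the usual `ceph df --format=json` schema and are an assumption if your Ceph release differs:

```python
# Sketch: replicate the logged capacity probe and parse its JSON.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
stats = json.loads(out)["stats"]          # assumed key, see lead-in
free_gib = stats["total_avail_bytes"] / 1024 ** 3
print(f"cluster free: {free_gib:.1f} GiB")
```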
Dec  7 05:07:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:14 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.478 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.479 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4911MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.480 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.480 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:07:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 05:07:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:14.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.594 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Instance 85f56bb8-2b0e-4405-a313-156300c853e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.595 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.595 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  7 05:07:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Dec  7 05:07:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:14 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.651 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing inventories for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  7 05:07:14 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.667 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updating ProviderTree inventory for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.668 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updating inventory in ProviderTree for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
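[annotation] A quick check of what the inventory above means to Placement, which derives usable capacity per resource class as int((total - reserved) * allocation_ratio). Illustrative values copied from the log line:

```python
# Worked example: capacity Placement will schedule against,
# given the inventory reported above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    print(rc, capacity)   # VCPU 32, MEMORY_MB 7168, DISK_GB 53
```

Note the later inventory push (10:07:15.234) raises DISK_GB reserved from 0 to 1 after the fresh `ceph df` sample, which drops disk capacity to int(58 * 0.9) = 52.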
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.725 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing aggregate associations for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.752 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing trait associations for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  7 05:07:14 np0005549474 nova_compute[256753]: 2025-12-07 10:07:14.794 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:07:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v749: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 0 B/s wr, 8 op/s
Dec  7 05:07:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:15.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:07:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650379800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:07:15 np0005549474 nova_compute[256753]: 2025-12-07 10:07:15.225 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:07:15 np0005549474 nova_compute[256753]: 2025-12-07 10:07:15.234 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updating inventory in ProviderTree for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  7 05:07:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:15 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:15 np0005549474 nova_compute[256753]: 2025-12-07 10:07:15.318 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updated inventory for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  7 05:07:15 np0005549474 nova_compute[256753]: 2025-12-07 10:07:15.319 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updating resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  7 05:07:15 np0005549474 nova_compute[256753]: 2025-12-07 10:07:15.319 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updating inventory in ProviderTree for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  7 05:07:15 np0005549474 nova_compute[256753]: 2025-12-07 10:07:15.359 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  7 05:07:15 np0005549474 nova_compute[256753]: 2025-12-07 10:07:15.360 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:07:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Dec  7 05:07:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Dec  7 05:07:15 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Dec  7 05:07:15 np0005549474 nova_compute[256753]: 2025-12-07 10:07:15.925 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b 85f56bb8-2b0e-4405-a313-156300c853e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.030 256757 DEBUG nova.storage.rbd_utils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] resizing rbd image 85f56bb8-2b0e-4405-a313-156300c853e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
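[annotation] Taken together, the rbd_utils lines trace the image-backend flow: check whether the instance disk exists in the vms pool, import the cached base file, then grow it to the flavor's root disk (1073741824 bytes = 1 GiB, matching root_gb=1). A condensed sketch using the CLI commands visible in the log; nova itself goes through librbd, and the `rbd info` existence check plus the `--size 1G` suffix are assumptions here, with locking and error handling omitted:

```python
# Sketch of the check -> import -> resize sequence logged above.
import subprocess

pool, image = "vms", "85f56bb8-2b0e-4405-a313-156300c853e4_disk"
base = "/var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b"
ceph = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

# Non-zero exit from "rbd info" means the image does not exist yet.
exists = subprocess.run(["rbd", "info", "--pool", pool, image, *ceph],
                        capture_output=True).returncode == 0
if not exists:
    subprocess.run(["rbd", "import", "--pool", pool, base, image,
                    "--image-format=2", *ceph], check=True)
    # 1073741824 bytes == 1 GiB, the m1.nano root disk from the log.
    subprocess.run(["rbd", "resize", "--pool", pool, image,
                    "--size", "1G", *ceph], check=True)
```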
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.161 256757 DEBUG nova.objects.instance [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'migration_context' on Instance uuid 85f56bb8-2b0e-4405-a313-156300c853e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.179 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.180 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Ensure instance console log exists: /var/lib/nova/instances/85f56bb8-2b0e-4405-a313-156300c853e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.180 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.180 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.181 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.356 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.356 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.357 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.357 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.382 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.383 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.384 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.384 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.385 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:07:16 np0005549474 nova_compute[256753]: 2025-12-07 10:07:16.385 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  7 05:07:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:16 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:16.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:16 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v751: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Dec  7 05:07:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:17.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:07:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:17.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:07:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:17.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:07:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:17.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:17 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.253 256757 DEBUG nova.network.neutron [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Updating instance_info_cache with network_info: [{"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.286 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Releasing lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.287 256757 DEBUG nova.compute.manager [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Instance network_info: |[{"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
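[annotation] The network_info blob logged above is plain JSON: a list of VIFs, each with a network, subnets, and per-subnet IPs. An illustrative helper (not a nova API) for pulling the fixed addresses out of it; key names are taken directly from the logged structure:

```python
# Sketch: extract fixed IPs from a logged network_info list.
import json

def fixed_ips(network_info_json: str) -> list[str]:
    ips = []
    for vif in json.loads(network_info_json):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip.get("type") == "fixed":
                    ips.append(ip["address"])
    return ips

# For port 231300d5-bcb5-4f0e-be76-d6422cfeb132 above this
# returns ["10.100.0.11"].
```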
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.287 256757 DEBUG oslo_concurrency.lockutils [req-c023a766-fa5a-4786-a45d-2186767bb5d3 req-003a10c6-1c36-4af3-a6c3-910a1333719f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.288 256757 DEBUG nova.network.neutron [req-c023a766-fa5a-4786-a45d-2186767bb5d3 req-003a10c6-1c36-4af3-a6c3-910a1333719f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Refreshing network info cache for port 231300d5-bcb5-4f0e-be76-d6422cfeb132 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.294 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Start _get_guest_xml network_info=[{"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'image_id': 'af7b5730-2fa9-449f-8ccb-a9519582f1b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.302 256757 WARNING nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.307 256757 DEBUG nova.virt.libvirt.host [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.308 256757 DEBUG nova.virt.libvirt.host [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.318 256757 DEBUG nova.virt.libvirt.host [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.319 256757 DEBUG nova.virt.libvirt.host [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.320 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.321 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-07T10:06:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bc1a767b-c985-4370-b41e-5cb294d603d7',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.322 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.322 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.323 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.323 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.324 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.324 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.325 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.325 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.326 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.326 256757 DEBUG nova.virt.hardware [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
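[annotation] The topology lines above enumerate sockets x cores x threads factorizations of the vCPU count under the reported 65536-per-axis limits; with 1 vCPU the only candidate is (1, 1, 1), hence the single "possible topology". A re-derivation sketch (nova's _get_possible_cpu_topologies differs in detail, e.g. ordering and preference handling):

```python
# Sketch: enumerate CPU topologies whose product equals the vCPU count.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                topos.append((s, c, t))
    return topos

print(possible_topologies(1))  # [(1, 1, 1)], matching the log
```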
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.332 256757 DEBUG nova.privsep.utils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.333 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:07:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 05:07:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2929662140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.841 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.868 256757 DEBUG nova.storage.rbd_utils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 85f56bb8-2b0e-4405-a313-156300c853e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:07:17 np0005549474 nova_compute[256753]: 2025-12-07 10:07:17.873 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:07:18 np0005549474 podman[262003]: 2025-12-07 10:07:18.247594273 +0000 UTC m=+0.063712294 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 05:07:18 np0005549474 podman[262004]: 2025-12-07 10:07:18.305002753 +0000 UTC m=+0.110003450 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:07:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 05:07:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1103610747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.330 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.332 256757 DEBUG nova.virt.libvirt.vif [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2095719119',display_name='tempest-TestNetworkBasicOps-server-2095719119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2095719119',id=1,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsVHMS8iocA8+Rh3fh2+y9lSS5qiLX7I8VOl9BfUUw2+sXQOsdN/jr824ramDTfkJWrUKjydtUaUwdlfo7Pw0CklT8ylELWbhX5dNUZiOWRtp5EZtMKgO29c1zzSh9SNA==',key_name='tempest-TestNetworkBasicOps-1528391493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-hoqfzoha',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:07:10Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=85f56bb8-2b0e-4405-a313-156300c853e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.332 256757 DEBUG nova.network.os_vif_util [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.333 256757 DEBUG nova.network.os_vif_util [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:6a:e2,bridge_name='br-int',has_traffic_filtering=True,id=231300d5-bcb5-4f0e-be76-d6422cfeb132,network=Network(ba5590d7-ace7-4d21-97d3-6f4299ad21a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap231300d5-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
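The two nova.network.os_vif_util records above show the Neutron-supplied VIF dict being converted into an os-vif versioned object. A minimal sketch of hand-building the same VIFOpenVSwitch with the os-vif object model, using field values taken from the log (the constructor-kwarg style is an assumption about os-vif's oslo.versionedobjects classes, not nova's code path):

    # Sketch: constructing the object that nova_to_osvif_vif produced above.
    from os_vif.objects import network, vif

    net = network.Network(
        id='ba5590d7-ace7-4d21-97d3-6f4299ad21a1',
        bridge='br-int',
        mtu=1442)
    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id='231300d5-bcb5-4f0e-be76-d6422cfeb132')
    ovs_vif = vif.VIFOpenVSwitch(
        id='231300d5-bcb5-4f0e-be76-d6422cfeb132',
        address='fa:16:3e:28:6a:e2',
        network=net,
        bridge_name='br-int',
        vif_name='tap231300d5-bc',
        has_traffic_filtering=True,   # "port_filter": true in the binding details
        preserve_on_delete=False,
        active=False,
        port_profile=profile)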
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.335 256757 DEBUG nova.objects.instance [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 85f56bb8-2b0e-4405-a313-156300c853e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:07:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:18 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.452 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] End _get_guest_xml xml=<domain type="kvm">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  <uuid>85f56bb8-2b0e-4405-a313-156300c853e4</uuid>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  <name>instance-00000001</name>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  <memory>131072</memory>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  <vcpu>1</vcpu>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  <metadata>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <nova:name>tempest-TestNetworkBasicOps-server-2095719119</nova:name>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <nova:creationTime>2025-12-07 10:07:17</nova:creationTime>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <nova:flavor name="m1.nano">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <nova:memory>128</nova:memory>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <nova:disk>1</nova:disk>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <nova:swap>0</nova:swap>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <nova:vcpus>1</nova:vcpus>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      </nova:flavor>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <nova:owner>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      </nova:owner>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <nova:ports>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <nova:port uuid="231300d5-bcb5-4f0e-be76-d6422cfeb132">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        </nova:port>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      </nova:ports>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    </nova:instance>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  </metadata>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  <sysinfo type="smbios">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <system>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <entry name="manufacturer">RDO</entry>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <entry name="product">OpenStack Compute</entry>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <entry name="serial">85f56bb8-2b0e-4405-a313-156300c853e4</entry>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <entry name="uuid">85f56bb8-2b0e-4405-a313-156300c853e4</entry>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <entry name="family">Virtual Machine</entry>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    </system>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  </sysinfo>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  <os>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <boot dev="hd"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <smbios mode="sysinfo"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  <features>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <acpi/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <apic/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <vmcoreinfo/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  </features>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  <clock offset="utc">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <timer name="pit" tickpolicy="delay"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <timer name="hpet" present="no"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  </clock>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  <cpu mode="host-model" match="exact">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <topology sockets="1" cores="1" threads="1"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  </cpu>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  <devices>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <disk type="network" device="disk">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/85f56bb8-2b0e-4405-a313-156300c853e4_disk">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <target dev="vda" bus="virtio"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <disk type="network" device="cdrom">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/85f56bb8-2b0e-4405-a313-156300c853e4_disk.config">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <target dev="sda" bus="sata"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <interface type="ethernet">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <mac address="fa:16:3e:28:6a:e2"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <driver name="vhost" rx_queue_size="512"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <mtu size="1442"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <target dev="tap231300d5-bc"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <serial type="pty">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <log file="/var/lib/nova/instances/85f56bb8-2b0e-4405-a313-156300c853e4/console.log" append="off"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    </serial>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <video>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    </video>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <input type="tablet" bus="usb"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <rng model="virtio">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <backend model="random">/dev/urandom</backend>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    </rng>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <controller type="usb" index="0"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    <memballoon model="virtio">
Dec  7 05:07:18 np0005549474 nova_compute[256753]:      <stats period="10"/>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:    </memballoon>
Dec  7 05:07:18 np0005549474 nova_compute[256753]:  </devices>
Dec  7 05:07:18 np0005549474 nova_compute[256753]: </domain>
Dec  7 05:07:18 np0005549474 nova_compute[256753]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
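With _get_guest_xml finished, the libvirt driver hands the <domain> document above to libvirtd to define and boot the guest. A minimal sketch of that step with libvirt-python (the file name is a placeholder; nova passes the XML string directly rather than reading it from a file):

    # Sketch: define and start a domain from the XML logged above.
    import libvirt

    xml = open('instance-00000001.xml').read()   # placeholder for the XML string
    conn = libvirt.open('qemu:///system')        # system-level hypervisor connection
    try:
        dom = conn.defineXML(xml)                # persist the domain definition
        dom.create()                             # boot it
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()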
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.454 256757 DEBUG nova.compute.manager [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Preparing to wait for external event network-vif-plugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.455 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.455 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.456 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.457 256757 DEBUG nova.virt.libvirt.vif [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2095719119',display_name='tempest-TestNetworkBasicOps-server-2095719119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2095719119',id=1,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsVHMS8iocA8+Rh3fh2+y9lSS5qiLX7I8VOl9BfUUw2+sXQOsdN/jr824ramDTfkJWrUKjydtUaUwdlfo7Pw0CklT8ylELWbhX5dNUZiOWRtp5EZtMKgO29c1zzSh9SNA==',key_name='tempest-TestNetworkBasicOps-1528391493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-hoqfzoha',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:07:10Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=85f56bb8-2b0e-4405-a313-156300c853e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.458 256757 DEBUG nova.network.os_vif_util [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.459 256757 DEBUG nova.network.os_vif_util [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:6a:e2,bridge_name='br-int',has_traffic_filtering=True,id=231300d5-bcb5-4f0e-be76-d6422cfeb132,network=Network(ba5590d7-ace7-4d21-97d3-6f4299ad21a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap231300d5-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.460 256757 DEBUG os_vif [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:6a:e2,bridge_name='br-int',has_traffic_filtering=True,id=231300d5-bcb5-4f0e-be76-d6422cfeb132,network=Network(ba5590d7-ace7-4d21-97d3-6f4299ad21a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap231300d5-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
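os_vif now dispatches the plug to its 'ovs' plugin. A minimal sketch of the public os-vif calls involved, reusing the ovs_vif object from the earlier sketch (InstanceInfo values come from the instance dump above; this illustrates the library API, not nova's exact call site):

    # Sketch: plugging a VIF through the public os-vif API.
    import os_vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()                          # loads the ovs plugin, among others
    info = InstanceInfo(
        uuid='85f56bb8-2b0e-4405-a313-156300c853e4',
        name='instance-00000001')
    os_vif.plug(ovs_vif, info)                   # ovs_vif: VIFOpenVSwitch from above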
Dec  7 05:07:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:18.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.525 256757 DEBUG ovsdbapp.backend.ovs_idl [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.526 256757 DEBUG ovsdbapp.backend.ovs_idl [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.526 256757 DEBUG ovsdbapp.backend.ovs_idl [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.527 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.528 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.528 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.531 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.533 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.536 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.551 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.552 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.552 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:07:18 np0005549474 nova_compute[256753]: 2025-12-07 10:07:18.553 256757 INFO oslo.privsep.daemon [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp2iw09pl2/privsep.sock']#033[00m
Dec  7 05:07:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:18 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v752: 337 pgs: 337 active+clean; 87 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 48 op/s
Dec  7 05:07:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:19.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.218 256757 INFO oslo.privsep.daemon [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.115 262056 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.121 262056 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.124 262056 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.125 262056 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262056#033[00m
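The capability line above (CAP_DAC_OVERRIDE|CAP_NET_ADMIN) reflects how the vif_plug_ovs privsep context is declared. A minimal sketch of declaring and using such a context with oslo.privsep (the prefix and function here are illustrative, not nova's actual vif_plug_ovs module):

    # Sketch: an oslo.privsep context with the capabilities logged above.
    from oslo_privsep import capabilities, priv_context

    vif_plug = priv_context.PrivContext(
        'demo_vif_plug',                         # illustrative prefix
        cfg_section='vif_plug_privileged',
        capabilities=[capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_NET_ADMIN])

    @vif_plug.entrypoint
    def create_tap_dev(dev):
        # The body executes inside the root privsep daemon, not the
        # caller's process.
        ...

When such a decorated function is first called, the library forks the helper seen in the 'Running privsep helper' record at 10:07:18.553 and proxies the call over the privsep socket.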
Dec  7 05:07:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:19 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.508 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.509 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap231300d5-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.510 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap231300d5-bc, col_values=(('external_ids', {'iface-id': '231300d5-bcb5-4f0e-be76-d6422cfeb132', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:6a:e2', 'vm-uuid': '85f56bb8-2b0e-4405-a313-156300c853e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.523 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:19 np0005549474 NetworkManager[49051]: <info>  [1765102039.5250] manager: (tap231300d5-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.528 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.533 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.534 256757 INFO os_vif [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:6a:e2,bridge_name='br-int',has_traffic_filtering=True,id=231300d5-bcb5-4f0e-be76-d6422cfeb132,network=Network(ba5590d7-ace7-4d21-97d3-6f4299ad21a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap231300d5-bc')#033[00m
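The AddPortCommand and DbSetCommand transaction at 10:07:19.509 is what the os-vif ovs plugin emits through ovsdbapp. A minimal standalone sketch of the same transaction against the ovsdb-server endpoint from the log (helper usage assumes ovsdbapp's open_vswitch schema API):

    # Sketch: the add-port + external_ids transaction, issued directly
    # with ovsdbapp against tcp:127.0.0.1:6640.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {'iface-id': '231300d5-bcb5-4f0e-be76-d6422cfeb132',
                    'iface-status': 'active',
                    'attached-mac': 'fa:16:3e:28:6a:e2',
                    'vm-uuid': '85f56bb8-2b0e-4405-a313-156300c853e4'}
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap231300d5-bc', may_exist=True))
        txn.add(api.db_set('Interface', 'tap231300d5-bc',
                           ('external_ids', external_ids)))

The iface-id written into external_ids is what ovn-controller matches against the Port_Binding logical_port when it claims the port a moment later.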
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.947 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.947 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.947 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No VIF found with MAC fa:16:3e:28:6a:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.948 256757 INFO nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Using config drive#033[00m
Dec  7 05:07:19 np0005549474 nova_compute[256753]: 2025-12-07 10:07:19.981 256757 DEBUG nova.storage.rbd_utils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 85f56bb8-2b0e-4405-a313-156300c853e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:07:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:19] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Dec  7 05:07:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:19] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Dec  7 05:07:20 np0005549474 nova_compute[256753]: 2025-12-07 10:07:20.278 256757 DEBUG nova.network.neutron [req-c023a766-fa5a-4786-a45d-2186767bb5d3 req-003a10c6-1c36-4af3-a6c3-910a1333719f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Updated VIF entry in instance network info cache for port 231300d5-bcb5-4f0e-be76-d6422cfeb132. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:07:20 np0005549474 nova_compute[256753]: 2025-12-07 10:07:20.279 256757 DEBUG nova.network.neutron [req-c023a766-fa5a-4786-a45d-2186767bb5d3 req-003a10c6-1c36-4af3-a6c3-910a1333719f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Updating instance_info_cache with network_info: [{"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:07:20 np0005549474 nova_compute[256753]: 2025-12-07 10:07:20.292 256757 DEBUG oslo_concurrency.lockutils [req-c023a766-fa5a-4786-a45d-2186767bb5d3 req-003a10c6-1c36-4af3-a6c3-910a1333719f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:07:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:20 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:20.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:20 np0005549474 nova_compute[256753]: 2025-12-07 10:07:20.524 256757 INFO nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Creating config drive at /var/lib/nova/instances/85f56bb8-2b0e-4405-a313-156300c853e4/disk.config#033[00m
Dec  7 05:07:20 np0005549474 nova_compute[256753]: 2025-12-07 10:07:20.533 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85f56bb8-2b0e-4405-a313-156300c853e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw57tb32g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:07:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:20 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:20 np0005549474 nova_compute[256753]: 2025-12-07 10:07:20.673 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85f56bb8-2b0e-4405-a313-156300c853e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw57tb32g" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:07:20 np0005549474 nova_compute[256753]: 2025-12-07 10:07:20.717 256757 DEBUG nova.storage.rbd_utils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 85f56bb8-2b0e-4405-a313-156300c853e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:07:20 np0005549474 nova_compute[256753]: 2025-12-07 10:07:20.722 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/85f56bb8-2b0e-4405-a313-156300c853e4/disk.config 85f56bb8-2b0e-4405-a313-156300c853e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:07:20 np0005549474 nova_compute[256753]: 2025-12-07 10:07:20.929 256757 DEBUG oslo_concurrency.processutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/85f56bb8-2b0e-4405-a313-156300c853e4/disk.config 85f56bb8-2b0e-4405-a313-156300c853e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:07:20 np0005549474 nova_compute[256753]: 2025-12-07 10:07:20.931 256757 INFO nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Deleting local config drive /var/lib/nova/instances/85f56bb8-2b0e-4405-a313-156300c853e4/disk.config because it was imported into RBD.#033[00m
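The config drive is a config-2 ISO9660 image built locally and then imported into the vms pool, exactly the two commands logged above. A minimal sketch reproducing that sequence with subprocess (the staging directory stands in for nova's tmpw57tb32g temp dir):

    # Sketch: build the config-2 ISO and import it into RBD, mirroring
    # the mkisofs and `rbd import` invocations above.
    import subprocess

    iso = ('/var/lib/nova/instances/'
           '85f56bb8-2b0e-4405-a313-156300c853e4/disk.config')
    metadata_dir = '/tmp/metadata'               # placeholder staging dir

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-quiet', '-J', '-r', '-V', 'config-2',
         metadata_dir],
        check=True)
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso,
         '85f56bb8-2b0e-4405-a313-156300c853e4_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)

Once the import succeeds the local copy is removed, as the 10:07:20.931 record notes, and the guest reads the drive over RBD through the sata cdrom device in the domain XML.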
Dec  7 05:07:20 np0005549474 systemd[1]: Starting libvirt secret daemon...
Dec  7 05:07:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v753: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Dec  7 05:07:20 np0005549474 systemd[1]: Started libvirt secret daemon.
Dec  7 05:07:21 np0005549474 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  7 05:07:21 np0005549474 kernel: tap231300d5-bc: entered promiscuous mode
Dec  7 05:07:21 np0005549474 NetworkManager[49051]: <info>  [1765102041.0800] manager: (tap231300d5-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Dec  7 05:07:21 np0005549474 ovn_controller[154296]: 2025-12-07T10:07:21Z|00027|binding|INFO|Claiming lport 231300d5-bcb5-4f0e-be76-d6422cfeb132 for this chassis.
Dec  7 05:07:21 np0005549474 ovn_controller[154296]: 2025-12-07T10:07:21Z|00028|binding|INFO|231300d5-bcb5-4f0e-be76-d6422cfeb132: Claiming fa:16:3e:28:6a:e2 10.100.0.11
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.084 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.093 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:21.110 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:6a:e2 10.100.0.11'], port_security=['fa:16:3e:28:6a:e2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '85f56bb8-2b0e-4405-a313-156300c853e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba5590d7-ace7-4d21-97d3-6f4299ad21a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71f2a529-e890-4416-bb37-8ebbeaaf7d18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3afdebe-ce17-484e-8cc0-e268e6f58f98, chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=231300d5-bcb5-4f0e-be76-d6422cfeb132) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:07:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:21.112 164143 INFO neutron.agent.ovn.metadata.agent [-] Port 231300d5-bcb5-4f0e-be76-d6422cfeb132 in datapath ba5590d7-ace7-4d21-97d3-6f4299ad21a1 bound to our chassis#033[00m
Dec  7 05:07:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:21.116 164143 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba5590d7-ace7-4d21-97d3-6f4299ad21a1#033[00m
Dec  7 05:07:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:21.117 164143 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpdd_2mu1v/privsep.sock']#033[00m
Dec  7 05:07:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:21.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:21 np0005549474 systemd-udevd[262157]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:07:21 np0005549474 systemd-machined[217882]: New machine qemu-1-instance-00000001.
Dec  7 05:07:21 np0005549474 NetworkManager[49051]: <info>  [1765102041.1671] device (tap231300d5-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 05:07:21 np0005549474 NetworkManager[49051]: <info>  [1765102041.1677] device (tap231300d5-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.201 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:21 np0005549474 ovn_controller[154296]: 2025-12-07T10:07:21Z|00029|binding|INFO|Setting lport 231300d5-bcb5-4f0e-be76-d6422cfeb132 ovn-installed in OVS
Dec  7 05:07:21 np0005549474 ovn_controller[154296]: 2025-12-07T10:07:21Z|00030|binding|INFO|Setting lport 231300d5-bcb5-4f0e-be76-d6422cfeb132 up in Southbound
Dec  7 05:07:21 np0005549474 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.206 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:21 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.792 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102041.7914023, 85f56bb8-2b0e-4405-a313-156300c853e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.793 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] VM Started (Lifecycle Event)#033[00m
Dec  7 05:07:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:21.802 164143 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  7 05:07:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:21.803 164143 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpdd_2mu1v/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  7 05:07:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:21.690 262215 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  7 05:07:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:21.697 262215 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  7 05:07:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:21.701 262215 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec  7 05:07:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:21.701 262215 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262215#033[00m
Dec  7 05:07:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:21.806 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[4d47b106-3495-4ca4-89d1-3e63e7b149ac]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.866 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.871 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102041.791553, 85f56bb8-2b0e-4405-a313-156300c853e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.871 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] VM Paused (Lifecycle Event)#033[00m
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.905 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.909 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  7 05:07:21 np0005549474 nova_compute[256753]: 2025-12-07 10:07:21.932 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
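The sync records compare numeric power states: the DB still holds 0 while libvirt reports 3 here, and 1 after the Resumed event below. These are nova.compute.power_state constants; 0, 1 and 3 are corroborated directly by this log, the rest are listed for completeness:

    # Decoding the power_state integers in the sync messages.
    POWER_STATE = {
        0: 'NOSTATE',     # DB power_state before the first sync
        1: 'RUNNING',     # VM power_state reported after 'Resumed'
        3: 'PAUSED',      # VM power_state reported during 'Paused'
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }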
Dec  7 05:07:22 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:22.353 262215 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:07:22 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:22.353 262215 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:07:22 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:22.353 262215 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:07:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:22 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.475 256757 DEBUG nova.compute.manager [req-f54c480a-6a1c-4a5b-96ac-d839ec3f1518 req-d53ef97a-13bb-4675-a79f-2a460bf28687 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Received event network-vif-plugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.475 256757 DEBUG oslo_concurrency.lockutils [req-f54c480a-6a1c-4a5b-96ac-d839ec3f1518 req-d53ef97a-13bb-4675-a79f-2a460bf28687 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.475 256757 DEBUG oslo_concurrency.lockutils [req-f54c480a-6a1c-4a5b-96ac-d839ec3f1518 req-d53ef97a-13bb-4675-a79f-2a460bf28687 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.476 256757 DEBUG oslo_concurrency.lockutils [req-f54c480a-6a1c-4a5b-96ac-d839ec3f1518 req-d53ef97a-13bb-4675-a79f-2a460bf28687 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.476 256757 DEBUG nova.compute.manager [req-f54c480a-6a1c-4a5b-96ac-d839ec3f1518 req-d53ef97a-13bb-4675-a79f-2a460bf28687 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Processing event network-vif-plugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.476 256757 DEBUG nova.compute.manager [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
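This closes the loop opened at 10:07:18.454: a waiter for network-vif-plugged was registered before the VIF was plugged, and the event Neutron just delivered releases it. A simplified sketch of that prepare-then-wait pattern with plain threading primitives (a condensation of nova's InstanceEvents machinery, not its actual implementation):

    # Simplified prepare/pop external-event pattern (illustrative only).
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}        # (instance_uuid, event_name) -> Event

        def prepare(self, instance_uuid, event_name):
            with self._lock:
                return self._events.setdefault(
                    (instance_uuid, event_name), threading.Event())

        def pop(self, instance_uuid, event_name):
            with self._lock:
                event = self._events.pop((instance_uuid, event_name), None)
            if event:
                event.set()          # releases the waiter in spawn()

    events = InstanceEvents()
    uuid = '85f56bb8-2b0e-4405-a313-156300c853e4'
    name = 'network-vif-plugged-231300d5-bcb5-4f0e-be76-d6422cfeb132'
    waiter = events.prepare(uuid, name)          # register BEFORE plugging
    # ... plug the VIF, define and start the domain ...
    threading.Timer(0.1, events.pop, args=(uuid, name)).start()  # simulated event
    assert waiter.wait(timeout=5)                # spawn() proceeds once it arrives

Registering before plugging is what makes the wait race-free: the 'Instance event wait completed in 0 seconds' record shows the event had already arrived by the time spawn() got around to waiting for it.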
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.481 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102042.481007, 85f56bb8-2b0e-4405-a313-156300c853e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.482 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] VM Resumed (Lifecycle Event)#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.485 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.490 256757 INFO nova.virt.libvirt.driver [-] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Instance spawned successfully.#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.490 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.504 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:07:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:22.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
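Editor's note: the recurring anonymous "HEAD / HTTP/1.0" requests that beast logs from 192.168.122.100/.102 every second or so have the shape of load-balancer health probes. A sketch of an equivalent probe, assuming the gateway answers plain HTTP on the port below (the port is an assumption, not taken from the log):

    import http.client

    # Same anonymous HEAD / as in the beast access line above; a 200 with
    # an empty body means the RGW frontend is accepting requests.
    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=5)  # port assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # expect 200
    conn.close()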
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.535 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.555 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
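Editor's note: the "Skip" above is Nova's power-state sync declining to act: libvirt reported the VM running (power_state 1) while the database still had NOSTATE (0) and task_state spawning, so the sync defers to the in-flight task. The guard reduces to a short check; a sketch with illustrative names (the 0/1 values mirror the log line):

    def should_sync(db_power_state, vm_power_state, task_state):
        # A pending task (here 'spawning') owns the instance: do not sync.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state

    print(should_sync(0, 1, 'spawning'))  # False -> "Skip." as logged above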
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.571 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.571 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.571 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.572 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.572 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.573 256757 DEBUG nova.virt.libvirt.driver [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
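Editor's note: the six "Found default for hw_*" lines record the libvirt driver persisting the bus and model defaults it actually used, so later rebuilds keep the same virtual hardware. Pinning the same properties on the source image makes the choice explicit instead of defaulted; a sketch via the OpenStack CLI from Python (the image name is illustrative):

    import subprocess

    # Pin the buses/models the driver would otherwise choose by default.
    subprocess.run(
        ['openstack', 'image', 'set',
         '--property', 'hw_disk_bus=virtio',
         '--property', 'hw_vif_model=virtio',
         '--property', 'hw_video_model=virtio',
         'my-guest-image'],  # illustrative image name
        check=True,
    )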
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.618 256757 INFO nova.compute.manager [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Took 11.91 seconds to spawn the instance on the hypervisor.#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.619 256757 DEBUG nova.compute.manager [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:07:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:22 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Dec  7 05:07:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Dec  7 05:07:22 np0005549474 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.690 256757 INFO nova.compute.manager [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Took 12.98 seconds to build instance.#033[00m
Dec  7 05:07:22 np0005549474 nova_compute[256753]: 2025-12-07 10:07:22.707 256757 DEBUG oslo_concurrency.lockutils [None req-942a50c8-1574-4e08-8d7e-69ab2eba82ff 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:07:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v755: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.012 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5054ac-27be-4b88-b5ba-ead02d220729]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.014 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba5590d7-a1 in ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.016 262215 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba5590d7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.016 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[c129a188-6278-4ed6-997e-c44381baf560]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.019 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5fab49-54b0-43a8-9428-e8f96980d2a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.063 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[11061763-ca77-4888-b671-3b16d110e467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.097 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[b030eeeb-8e47-4eb0-a05f-859ff1443ef6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.101 164143 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp6xrf4f3a/privsep.sock']#033[00m
Dec  7 05:07:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:23.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:23 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.783 164143 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.785 164143 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6xrf4f3a/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.687 262259 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.690 262259 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.692 262259 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.692 262259 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262259#033[00m
Dec  7 05:07:23 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:23.790 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[4b37ffdb-d776-4566-8292-3014c8e7da90]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
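Editor's note: the run above shows the agent forking a privsep helper through sudo/neutron-rootwrap; the root-side daemon reports uid/gid 0/0 and the CAP_NET_ADMIN|CAP_SYS_ADMIN capability set, then serves privileged calls over the /tmp/.../privsep.sock socket. A minimal sketch of how such a context is declared with the public oslo.privsep API (the prefix and the function are illustrative; the capabilities match the log):

    from oslo_privsep import capabilities, priv_context

    # Context matching the eff/prm capability set reported above.
    ctx = priv_context.PrivContext(
        'example',  # illustrative prefix
        cfg_section='privsep',
        pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN],
    )

    @ctx.entrypoint
    def set_link_up(ifname):
        # Runs inside the root privsep daemon, not the calling agent.
        print('would bring up', ifname)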
Dec  7 05:07:24 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:24.306 262259 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:07:24 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:24.306 262259 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:07:24 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:24.307 262259 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:07:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:24 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:24.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:24 np0005549474 nova_compute[256753]: 2025-12-07 10:07:24.558 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:24 np0005549474 nova_compute[256753]: 2025-12-07 10:07:24.584 256757 DEBUG nova.compute.manager [req-ed27e588-1ea0-4f8b-a902-4428fc23c432 req-170e7759-2dde-4cb8-9a0c-293e4c7d89d3 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Received event network-vif-plugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:07:24 np0005549474 nova_compute[256753]: 2025-12-07 10:07:24.585 256757 DEBUG oslo_concurrency.lockutils [req-ed27e588-1ea0-4f8b-a902-4428fc23c432 req-170e7759-2dde-4cb8-9a0c-293e4c7d89d3 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:07:24 np0005549474 nova_compute[256753]: 2025-12-07 10:07:24.585 256757 DEBUG oslo_concurrency.lockutils [req-ed27e588-1ea0-4f8b-a902-4428fc23c432 req-170e7759-2dde-4cb8-9a0c-293e4c7d89d3 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:07:24 np0005549474 nova_compute[256753]: 2025-12-07 10:07:24.585 256757 DEBUG oslo_concurrency.lockutils [req-ed27e588-1ea0-4f8b-a902-4428fc23c432 req-170e7759-2dde-4cb8-9a0c-293e4c7d89d3 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:07:24 np0005549474 nova_compute[256753]: 2025-12-07 10:07:24.585 256757 DEBUG nova.compute.manager [req-ed27e588-1ea0-4f8b-a902-4428fc23c432 req-170e7759-2dde-4cb8-9a0c-293e4c7d89d3 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] No waiting events found dispatching network-vif-plugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:07:24 np0005549474 nova_compute[256753]: 2025-12-07 10:07:24.586 256757 WARNING nova.compute.manager [req-ed27e588-1ea0-4f8b-a902-4428fc23c432 req-170e7759-2dde-4cb8-9a0c-293e4c7d89d3 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Received unexpected event network-vif-plugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 for instance with vm_state active and task_state None.#033[00m
Dec  7 05:07:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:24 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:24 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:24.918 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[01872c41-472c-42a8-ac5d-d0e76f69a7f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:24 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:24.945 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[be358de7-0134-477f-8c15-4d513c55f819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:24 np0005549474 NetworkManager[49051]: <info>  [1765102044.9474] manager: (tapba5590d7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Dec  7 05:07:24 np0005549474 systemd-udevd[262272]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:07:24 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:24.975 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[4befd620-9397-47f5-804b-8483843ec264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:24 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:24.978 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[81a5b36d-d7f1-471e-92e0-bc885419e818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v756: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.3 MiB/s wr, 89 op/s
Dec  7 05:07:25 np0005549474 NetworkManager[49051]: <info>  [1765102045.0070] device (tapba5590d7-a0): carrier: link connected
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.015 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[39c2296a-4a1d-4816-b3f6-31499b82579f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.042 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[4e70c608-86f2-46d5-bb15-a6be2d437e65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba5590d7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:1a:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400766, 'reachable_time': 38891, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262290, 'error': None, 'target': 'ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.068 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[005457df-4a1a-44bd-8416-6d16a1fc5537]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:1a79'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400766, 'tstamp': 400766}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262291, 'error': None, 'target': 'ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.093 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[059a9399-677f-4af5-8c6c-2327b305ad44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba5590d7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:1a:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400766, 'reachable_time': 38891, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262292, 'error': None, 'target': 'ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
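Editor's note: the two large privsep replies above are netlink RTM_NEWADDR/RTM_NEWLINK dumps for the freshly created veth end tapba5590d7-a1 inside the ovnmeta- namespace. The agent obtains them through pyroute2; a sketch of an equivalent query (namespace and interface names taken from the log):

    from pyroute2 import NetNS

    # Read the same IFLA_* attributes seen in the RTM_NEWLINK reply above.
    ns = NetNS('ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1')
    try:
        for link in ns.get_links():
            if link.get_attr('IFLA_IFNAME') == 'tapba5590d7-a1':
                print(link.get_attr('IFLA_ADDRESS'),    # fa:16:3e:26:1a:79
                      link.get_attr('IFLA_OPERSTATE'))  # UP
    finally:
        ns.close()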
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.132 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[1144e40a-05bf-41fa-bc4e-37151627bfda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:25.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.201 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[77d5a7ea-9b5c-4858-abc5-4486b78caa49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.204 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba5590d7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.205 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.205 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba5590d7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:07:25 np0005549474 kernel: tapba5590d7-a0: entered promiscuous mode
Dec  7 05:07:25 np0005549474 nova_compute[256753]: 2025-12-07 10:07:25.207 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:25 np0005549474 NetworkManager[49051]: <info>  [1765102045.2077] manager: (tapba5590d7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec  7 05:07:25 np0005549474 nova_compute[256753]: 2025-12-07 10:07:25.210 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.211 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba5590d7-a0, col_values=(('external_ids', {'iface-id': 'c29113f5-93e1-45cf-a1b5-872e1cb341ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
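Editor's note: the DelPortCommand/AddPortCommand/DbSetCommand transactions above take tapba5590d7-a0 off br-ex, plug it into br-int, and set external_ids:iface-id so ovn-controller can bind the port. A sketch of the same three steps through ovsdbapp's high-level API (the ovsdb socket path is the conventional default, an assumption here):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # path assumed
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One execute per command, mirroring the "txn n=1" entries above.
    api.del_port('tapba5590d7-a0', bridge='br-ex',
                 if_exists=True).execute(check_error=True)
    api.add_port('br-int', 'tapba5590d7-a0',
                 may_exist=True).execute(check_error=True)
    api.db_set('Interface', 'tapba5590d7-a0',
               ('external_ids',
                {'iface-id': 'c29113f5-93e1-45cf-a1b5-872e1cb341ba'})
               ).execute(check_error=True)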
Dec  7 05:07:25 np0005549474 nova_compute[256753]: 2025-12-07 10:07:25.211 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:25 np0005549474 ovn_controller[154296]: 2025-12-07T10:07:25Z|00031|binding|INFO|Releasing lport c29113f5-93e1-45cf-a1b5-872e1cb341ba from this chassis (sb_readonly=0)
Dec  7 05:07:25 np0005549474 nova_compute[256753]: 2025-12-07 10:07:25.241 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:25 np0005549474 nova_compute[256753]: 2025-12-07 10:07:25.242 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.243 164143 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ba5590d7-ace7-4d21-97d3-6f4299ad21a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ba5590d7-ace7-4d21-97d3-6f4299ad21a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.244 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[aee7c40e-00b2-46cc-a083-6b53e0b9047b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.245 164143 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: global
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    log         /dev/log local0 debug
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    log-tag     haproxy-metadata-proxy-ba5590d7-ace7-4d21-97d3-6f4299ad21a1
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    user        root
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    group       root
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    maxconn     1024
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    pidfile     /var/lib/neutron/external/pids/ba5590d7-ace7-4d21-97d3-6f4299ad21a1.pid.haproxy
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    daemon
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: defaults
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    log global
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    mode http
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    option httplog
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    option dontlognull
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    option http-server-close
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    option forwardfor
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    retries                 3
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    timeout http-request    30s
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    timeout connect         30s
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    timeout client          32s
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    timeout server          32s
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    timeout http-keep-alive 30s
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: listen listener
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    bind 169.254.169.254:80
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    server metadata /var/lib/neutron/metadata_proxy
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]:    http-request add-header X-OVN-Network-ID ba5590d7-ace7-4d21-97d3-6f4299ad21a1
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
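Editor's note: the rendered haproxy_cfg above binds 169.254.169.254:80 inside the ovnmeta- namespace, adds the X-OVN-Network-ID header, and forwards to the metadata agent over the /var/lib/neutron/metadata_proxy unix socket. From inside that namespace a guest-style request looks like this (run under ip netns exec; the instance-id path is the standard metadata endpoint):

    import http.client

    # Hits the haproxy listener defined above, which injects
    # X-OVN-Network-ID before proxying to the metadata agent socket.
    conn = http.client.HTTPConnection('169.254.169.254', 80, timeout=5)
    conn.request('GET', '/latest/meta-data/instance-id')
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())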
Dec  7 05:07:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:25.248 164143 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1', 'env', 'PROCESS_TAG=haproxy-ba5590d7-ace7-4d21-97d3-6f4299ad21a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ba5590d7-ace7-4d21-97d3-6f4299ad21a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  7 05:07:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:25 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:25 np0005549474 podman[262328]: 2025-12-07 10:07:25.546707835 +0000 UTC m=+0.071746713 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:07:25 np0005549474 podman[262394]: 2025-12-07 10:07:25.661679118 +0000 UTC m=+0.055789576 container create 0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:07:25 np0005549474 systemd[1]: Started libpod-conmon-0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79.scope.
Dec  7 05:07:25 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:07:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e31e450c0f06cc2be77e780b42a3d48c6dcbcae8840a53ca1e277b3f0124fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:25 np0005549474 podman[262394]: 2025-12-07 10:07:25.630887626 +0000 UTC m=+0.024998104 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  7 05:07:25 np0005549474 podman[262394]: 2025-12-07 10:07:25.737281125 +0000 UTC m=+0.131391653 container init 0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  7 05:07:25 np0005549474 podman[262394]: 2025-12-07 10:07:25.745631954 +0000 UTC m=+0.139742442 container start 0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:07:25 np0005549474 neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1[262420]: [NOTICE]   (262437) : New worker (262439) forked
Dec  7 05:07:25 np0005549474 neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1[262420]: [NOTICE]   (262437) : Loading success.
Dec  7 05:07:26 np0005549474 podman[262494]: 2025-12-07 10:07:26.109533283 +0000 UTC m=+0.059661173 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:07:26 np0005549474 podman[262494]: 2025-12-07 10:07:26.206986857 +0000 UTC m=+0.157114717 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 05:07:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:26 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:26.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:26 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:26 np0005549474 podman[262612]: 2025-12-07 10:07:26.70666623 +0000 UTC m=+0.048984961 container exec 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:07:26 np0005549474 podman[262612]: 2025-12-07 10:07:26.714472314 +0000 UTC m=+0.056791055 container exec_died 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:07:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v757: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 83 op/s
Dec  7 05:07:27 np0005549474 podman[262706]: 2025-12-07 10:07:27.023598125 +0000 UTC m=+0.054409949 container exec 501f3ba0072969d1653569a243969f9e556a26b8997c0aef685da2bfbfdac70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  7 05:07:27 np0005549474 podman[262706]: 2025-12-07 10:07:27.0354823 +0000 UTC m=+0.066294094 container exec_died 501f3ba0072969d1653569a243969f9e556a26b8997c0aef685da2bfbfdac70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:07:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:27.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:07:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:27.114Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
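Editor's note: both ceph-dashboard webhook receivers are unreachable here (connect timeout to compute-2, deadline exceeded for compute-1), so alertmanager drops the notification after two attempts. A quick reachability check against the failing receiver from the log (diagnostic only, not part of the stack):

    import socket

    # Expect this to fail the same way while the dashboard endpoint is down.
    try:
        socket.create_connection(('192.168.122.102', 8443), timeout=3).close()
        print('receiver reachable')
    except OSError as exc:
        print('receiver unreachable:', exc)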
Dec  7 05:07:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:27.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:27 np0005549474 podman[262773]: 2025-12-07 10:07:27.247476907 +0000 UTC m=+0.061886413 container exec e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 05:07:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:27 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:27 np0005549474 podman[262773]: 2025-12-07 10:07:27.261440738 +0000 UTC m=+0.075850154 container exec_died e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 05:07:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:07:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
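Editor's note: the handle_command/audit pair above is the mgr's cephadm module polling the monitor for the OSD blocklist. The same query can be issued by hand; a sketch shelling out to the ceph CLI, mirroring the dispatched {"prefix": "osd blocklist ls", "format": "json"}:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'osd', 'blocklist', 'ls', '--format', 'json'],
        check=True, capture_output=True, text=True).stdout
    # An empty blocklist serializes as an empty JSON array.
    print(json.loads(out) if out.strip() else [])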
Dec  7 05:07:27 np0005549474 podman[262842]: 2025-12-07 10:07:27.527415251 +0000 UTC m=+0.063991481 container exec 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.28.2, name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph.)
Dec  7 05:07:27 np0005549474 podman[262842]: 2025-12-07 10:07:27.543668205 +0000 UTC m=+0.080244425 container exec_died 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, version=2.2.4)
Dec  7 05:07:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:27 np0005549474 podman[262904]: 2025-12-07 10:07:27.856882108 +0000 UTC m=+0.077988733 container exec d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:07:27 np0005549474 podman[262904]: 2025-12-07 10:07:27.911747169 +0000 UTC m=+0.132853754 container exec_died d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:07:28 np0005549474 podman[262980]: 2025-12-07 10:07:28.246066849 +0000 UTC m=+0.096738175 container exec d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 05:07:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:28 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:28 np0005549474 podman[262980]: 2025-12-07 10:07:28.439858988 +0000 UTC m=+0.290530304 container exec_died d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 05:07:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:28.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:28 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:28 np0005549474 podman[263089]: 2025-12-07 10:07:28.96916396 +0000 UTC m=+0.081580771 container exec 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:07:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v758: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 90 op/s
Dec  7 05:07:29 np0005549474 podman[263089]: 2025-12-07 10:07:29.038547858 +0000 UTC m=+0.150964639 container exec_died 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:29.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:29 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:29 np0005549474 nova_compute[256753]: 2025-12-07 10:07:29.559 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:29 np0005549474 nova_compute[256753]: 2025-12-07 10:07:29.561 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:29 np0005549474 ovn_controller[154296]: 2025-12-07T10:07:29Z|00032|binding|INFO|Releasing lport c29113f5-93e1-45cf-a1b5-872e1cb341ba from this chassis (sb_readonly=0)
Dec  7 05:07:29 np0005549474 NetworkManager[49051]: <info>  [1765102049.6021] manager: (patch-br-int-to-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Dec  7 05:07:29 np0005549474 nova_compute[256753]: 2025-12-07 10:07:29.603 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:29 np0005549474 NetworkManager[49051]: <info>  [1765102049.6067] device (patch-br-int-to-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 05:07:29 np0005549474 NetworkManager[49051]: <info>  [1765102049.6089] manager: (patch-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Dec  7 05:07:29 np0005549474 NetworkManager[49051]: <info>  [1765102049.6096] device (patch-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 05:07:29 np0005549474 NetworkManager[49051]: <info>  [1765102049.6114] manager: (patch-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec  7 05:07:29 np0005549474 NetworkManager[49051]: <info>  [1765102049.6126] manager: (patch-br-int-to-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Dec  7 05:07:29 np0005549474 NetworkManager[49051]: <info>  [1765102049.6134] device (patch-br-int-to-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  7 05:07:29 np0005549474 NetworkManager[49051]: <info>  [1765102049.6141] device (patch-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
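The paired patch-br-int-to-provnet-*/patch-provnet-*-to-br-int devices NetworkManager enumerates above are the two ends of an OVS patch cable that ovn-controller wires between the integration bridge (br-int) and the provider bridge while it rebinds the logical port. One way to inspect such a pair, sketched with standard ovs-vsctl subcommands (the bridge name is taken from the log; everything else is illustrative):

    # List patch ports on br-int and show which peer each one points at.
    import subprocess

    def vsctl(*args):
        return subprocess.run(("ovs-vsctl",) + args, check=True,
                              capture_output=True, text=True).stdout.strip()

    for port in vsctl("list-ports", "br-int").split():
        if port.startswith("patch-"):
            print(port, "->", vsctl("get", "Interface", port, "options:peer"))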
Dec  7 05:07:29 np0005549474 ovn_controller[154296]: 2025-12-07T10:07:29Z|00033|binding|INFO|Releasing lport c29113f5-93e1-45cf-a1b5-872e1cb341ba from this chassis (sb_readonly=0)
Dec  7 05:07:29 np0005549474 nova_compute[256753]: 2025-12-07 10:07:29.636 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:29 np0005549474 nova_compute[256753]: 2025-12-07 10:07:29.640 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:07:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:29] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Dec  7 05:07:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:29] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:07:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:07:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:07:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
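The handle_command/audit pairs above show the cephadm mgr module persisting its per-host state (device inventory, service specs, the OSD removal queue) as config-key entries, and fetching keyrings plus a minimal ceph.conf on behalf of the host. The same monitor commands can be replayed from the CLI; a minimal sketch via subprocess (the JSON payload is illustrative, cephadm stores its own serialized structures):

    # Replay the audited monitor commands through the ceph CLI.
    import json, subprocess

    def ceph(*args):
        return subprocess.run(("ceph",) + args, check=True,
                              capture_output=True, text=True).stdout

    ceph("config-key", "set", "mgr/cephadm/host.compute-0",
         json.dumps({"hostname": "compute-0"}))          # illustrative value
    print(ceph("config", "generate-minimal-conf"))       # minimal client ceph.conf
    print(ceph("auth", "get", "client.admin"))           # keyring, as dispatched above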
Dec  7 05:07:30 np0005549474 nova_compute[256753]: 2025-12-07 10:07:30.397 256757 DEBUG nova.compute.manager [req-f442063f-ffb2-4924-8ef4-4777a07b7bfa req-839530b2-2c91-4440-8e89-046887010a26 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Received event network-changed-231300d5-bcb5-4f0e-be76-d6422cfeb132 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:07:30 np0005549474 nova_compute[256753]: 2025-12-07 10:07:30.397 256757 DEBUG nova.compute.manager [req-f442063f-ffb2-4924-8ef4-4777a07b7bfa req-839530b2-2c91-4440-8e89-046887010a26 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Refreshing instance network info cache due to event network-changed-231300d5-bcb5-4f0e-be76-d6422cfeb132. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:07:30 np0005549474 nova_compute[256753]: 2025-12-07 10:07:30.398 256757 DEBUG oslo_concurrency.lockutils [req-f442063f-ffb2-4924-8ef4-4777a07b7bfa req-839530b2-2c91-4440-8e89-046887010a26 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:07:30 np0005549474 nova_compute[256753]: 2025-12-07 10:07:30.398 256757 DEBUG oslo_concurrency.lockutils [req-f442063f-ffb2-4924-8ef4-4777a07b7bfa req-839530b2-2c91-4440-8e89-046887010a26 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:07:30 np0005549474 nova_compute[256753]: 2025-12-07 10:07:30.398 256757 DEBUG nova.network.neutron [req-f442063f-ffb2-4924-8ef4-4777a07b7bfa req-839530b2-2c91-4440-8e89-046887010a26 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Refreshing network info cache for port 231300d5-bcb5-4f0e-be76-d6422cfeb132 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:07:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:30 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:30.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:30 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:30 np0005549474 podman[263309]: 2025-12-07 10:07:30.678468126 +0000 UTC m=+0.055913050 container create cec1790944c499d42c9fbdf920939050aa26d8da4c683ffcdbd5448f8b3f5713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cohen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:07:30 np0005549474 systemd[1]: Started libpod-conmon-cec1790944c499d42c9fbdf920939050aa26d8da4c683ffcdbd5448f8b3f5713.scope.
Dec  7 05:07:30 np0005549474 podman[263309]: 2025-12-07 10:07:30.65558681 +0000 UTC m=+0.033031804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:07:30 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:07:30 np0005549474 podman[263309]: 2025-12-07 10:07:30.774289267 +0000 UTC m=+0.151734261 container init cec1790944c499d42c9fbdf920939050aa26d8da4c683ffcdbd5448f8b3f5713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cohen, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:07:30 np0005549474 podman[263309]: 2025-12-07 10:07:30.784469635 +0000 UTC m=+0.161914579 container start cec1790944c499d42c9fbdf920939050aa26d8da4c683ffcdbd5448f8b3f5713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cohen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:07:30 np0005549474 podman[263309]: 2025-12-07 10:07:30.788601428 +0000 UTC m=+0.166046372 container attach cec1790944c499d42c9fbdf920939050aa26d8da4c683ffcdbd5448f8b3f5713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cohen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:07:30 np0005549474 festive_cohen[263326]: 167 167
Dec  7 05:07:30 np0005549474 systemd[1]: libpod-cec1790944c499d42c9fbdf920939050aa26d8da4c683ffcdbd5448f8b3f5713.scope: Deactivated successfully.
Dec  7 05:07:30 np0005549474 podman[263309]: 2025-12-07 10:07:30.793588895 +0000 UTC m=+0.171033839 container died cec1790944c499d42c9fbdf920939050aa26d8da4c683ffcdbd5448f8b3f5713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:07:30 np0005549474 systemd[1]: var-lib-containers-storage-overlay-dc60d009de8179d73cc9c5a5381c27e1efe7e298f5e0abcd0eedd0c5c9f2e18e-merged.mount: Deactivated successfully.
Dec  7 05:07:30 np0005549474 podman[263309]: 2025-12-07 10:07:30.863490536 +0000 UTC m=+0.240935460 container remove cec1790944c499d42c9fbdf920939050aa26d8da4c683ffcdbd5448f8b3f5713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cohen, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  7 05:07:30 np0005549474 systemd[1]: libpod-conmon-cec1790944c499d42c9fbdf920939050aa26d8da4c683ffcdbd5448f8b3f5713.scope: Deactivated successfully.
Dec  7 05:07:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v759: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec  7 05:07:31 np0005549474 podman[263354]: 2025-12-07 10:07:31.063870524 +0000 UTC m=+0.045778822 container create bffa0c3aa4de024bbdedc1b5a4ab78d016aff88ef9ab3eb73c159c79ef62e1c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:07:31 np0005549474 systemd[1]: Started libpod-conmon-bffa0c3aa4de024bbdedc1b5a4ab78d016aff88ef9ab3eb73c159c79ef62e1c3.scope.
Dec  7 05:07:31 np0005549474 podman[263354]: 2025-12-07 10:07:31.041303647 +0000 UTC m=+0.023211985 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:07:31 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:07:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2381a1b8129737ed4d08f92ac09070472d1bdc0bb5e0e7db467ba9b2fe876286/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2381a1b8129737ed4d08f92ac09070472d1bdc0bb5e0e7db467ba9b2fe876286/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2381a1b8129737ed4d08f92ac09070472d1bdc0bb5e0e7db467ba9b2fe876286/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2381a1b8129737ed4d08f92ac09070472d1bdc0bb5e0e7db467ba9b2fe876286/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2381a1b8129737ed4d08f92ac09070472d1bdc0bb5e0e7db467ba9b2fe876286/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
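The repeated xfs warnings are the kernel noting that these overlay bind mounts sit on a filesystem formatted without big timestamps, so inode timestamps max out at 0x7fffffff seconds after the epoch, the classic 32-bit time_t ceiling. A one-liner confirming where that lands:

    # 0x7fffffff from the kernel message is the 32-bit time_t limit.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, hence "supports timestamps until 2038"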
Dec  7 05:07:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:31.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:31 np0005549474 podman[263354]: 2025-12-07 10:07:31.157397592 +0000 UTC m=+0.139305900 container init bffa0c3aa4de024bbdedc1b5a4ab78d016aff88ef9ab3eb73c159c79ef62e1c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:07:31 np0005549474 podman[263354]: 2025-12-07 10:07:31.172976398 +0000 UTC m=+0.154884726 container start bffa0c3aa4de024bbdedc1b5a4ab78d016aff88ef9ab3eb73c159c79ef62e1c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:07:31 np0005549474 podman[263354]: 2025-12-07 10:07:31.176639977 +0000 UTC m=+0.158548285 container attach bffa0c3aa4de024bbdedc1b5a4ab78d016aff88ef9ab3eb73c159c79ef62e1c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:07:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:31 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:31 np0005549474 nostalgic_kapitsa[263370]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:07:31 np0005549474 nostalgic_kapitsa[263370]: --> All data devices are unavailable
Dec  7 05:07:31 np0005549474 systemd[1]: libpod-bffa0c3aa4de024bbdedc1b5a4ab78d016aff88ef9ab3eb73c159c79ef62e1c3.scope: Deactivated successfully.
Dec  7 05:07:31 np0005549474 podman[263354]: 2025-12-07 10:07:31.598593655 +0000 UTC m=+0.580501953 container died bffa0c3aa4de024bbdedc1b5a4ab78d016aff88ef9ab3eb73c159c79ef62e1c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 05:07:31 np0005549474 nova_compute[256753]: 2025-12-07 10:07:31.609 256757 DEBUG nova.network.neutron [req-f442063f-ffb2-4924-8ef4-4777a07b7bfa req-839530b2-2c91-4440-8e89-046887010a26 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Updated VIF entry in instance network info cache for port 231300d5-bcb5-4f0e-be76-d6422cfeb132. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:07:31 np0005549474 nova_compute[256753]: 2025-12-07 10:07:31.613 256757 DEBUG nova.network.neutron [req-f442063f-ffb2-4924-8ef4-4777a07b7bfa req-839530b2-2c91-4440-8e89-046887010a26 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Updating instance_info_cache with network_info: [{"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:07:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2381a1b8129737ed4d08f92ac09070472d1bdc0bb5e0e7db467ba9b2fe876286-merged.mount: Deactivated successfully.
Dec  7 05:07:31 np0005549474 nova_compute[256753]: 2025-12-07 10:07:31.638 256757 DEBUG oslo_concurrency.lockutils [req-f442063f-ffb2-4924-8ef4-4777a07b7bfa req-839530b2-2c91-4440-8e89-046887010a26 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
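The Acquiring/Acquired/Releasing trio that nova-compute logs around "refresh_cache-85f56bb8-..." is oslo.concurrency's named-lock pattern: every external event for the same instance serializes on one lock name while the network info cache is refreshed. A minimal sketch of the same pattern (the refresh body is a placeholder):

    # Serialize cache refreshes per instance on a named lock, as the
    # oslo_concurrency.lockutils log lines above do.
    from oslo_concurrency import lockutils

    instance_uuid = "85f56bb8-2b0e-4405-a313-156300c853e4"
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        # re-query Neutron for the port and store the result in the cache
        pass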
Dec  7 05:07:31 np0005549474 podman[263354]: 2025-12-07 10:07:31.66240777 +0000 UTC m=+0.644316068 container remove bffa0c3aa4de024bbdedc1b5a4ab78d016aff88ef9ab3eb73c159c79ef62e1c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 05:07:31 np0005549474 systemd[1]: libpod-conmon-bffa0c3aa4de024bbdedc1b5a4ab78d016aff88ef9ab3eb73c159c79ef62e1c3.scope: Deactivated successfully.
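The create/init/start/attach/died/remove cycle around nostalgic_kapitsa (and the neighboring throwaway containers) is cephadm probing the host's disks with ceph-volume inside short-lived podman containers; here the probe reports one LVM candidate that is already consumed, hence "All data devices are unavailable". A hedged reconstruction of such a probe; the exact ceph-volume arguments are not in the log, so "inventory" is an illustrative stand-in:

    # Run ceph-volume in a throwaway container, the way the log's
    # one-shot podman containers do, and capture its JSON report.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
         "--entrypoint", "ceph-volume", IMAGE, "inventory", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    print(out)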
Dec  7 05:07:32 np0005549474 podman[263496]: 2025-12-07 10:07:32.355458019 +0000 UTC m=+0.069497681 container create a1627ae4f7e82d767822f5110008f7bb79eb1d88c81f37027184f84f3ac8b55e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 05:07:32 np0005549474 systemd[1]: Started libpod-conmon-a1627ae4f7e82d767822f5110008f7bb79eb1d88c81f37027184f84f3ac8b55e.scope.
Dec  7 05:07:32 np0005549474 podman[263496]: 2025-12-07 10:07:32.326557569 +0000 UTC m=+0.040597281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:07:32 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:07:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:32 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:32 np0005549474 podman[263496]: 2025-12-07 10:07:32.452006419 +0000 UTC m=+0.166046121 container init a1627ae4f7e82d767822f5110008f7bb79eb1d88c81f37027184f84f3ac8b55e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_greider, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:07:32 np0005549474 podman[263496]: 2025-12-07 10:07:32.46299084 +0000 UTC m=+0.177030492 container start a1627ae4f7e82d767822f5110008f7bb79eb1d88c81f37027184f84f3ac8b55e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:07:32 np0005549474 podman[263496]: 2025-12-07 10:07:32.467501802 +0000 UTC m=+0.181541504 container attach a1627ae4f7e82d767822f5110008f7bb79eb1d88c81f37027184f84f3ac8b55e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:07:32 np0005549474 hardcore_greider[263513]: 167 167
Dec  7 05:07:32 np0005549474 systemd[1]: libpod-a1627ae4f7e82d767822f5110008f7bb79eb1d88c81f37027184f84f3ac8b55e.scope: Deactivated successfully.
Dec  7 05:07:32 np0005549474 podman[263496]: 2025-12-07 10:07:32.47214029 +0000 UTC m=+0.186179912 container died a1627ae4f7e82d767822f5110008f7bb79eb1d88c81f37027184f84f3ac8b55e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_greider, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Dec  7 05:07:32 np0005549474 systemd[1]: var-lib-containers-storage-overlay-21a9a17c2abe5c73312f94937a192ca89ac931c526886b9265ef8ce0275cf593-merged.mount: Deactivated successfully.
Dec  7 05:07:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:32.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:32 np0005549474 podman[263496]: 2025-12-07 10:07:32.526222368 +0000 UTC m=+0.240262000 container remove a1627ae4f7e82d767822f5110008f7bb79eb1d88c81f37027184f84f3ac8b55e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_greider, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 05:07:32 np0005549474 systemd[1]: libpod-conmon-a1627ae4f7e82d767822f5110008f7bb79eb1d88c81f37027184f84f3ac8b55e.scope: Deactivated successfully.
Dec  7 05:07:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:32 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:32 np0005549474 podman[263538]: 2025-12-07 10:07:32.77207247 +0000 UTC m=+0.067035824 container create 9acc9d1a0e1536961df876970274693054da59262d1d2405cbcc2406e962b457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:07:32 np0005549474 systemd[1]: Started libpod-conmon-9acc9d1a0e1536961df876970274693054da59262d1d2405cbcc2406e962b457.scope.
Dec  7 05:07:32 np0005549474 podman[263538]: 2025-12-07 10:07:32.753763059 +0000 UTC m=+0.048726433 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:07:32 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:07:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79a75916ccf426d6ff9276ea1494389ebd1725090da579a57bbf4e6306d459/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79a75916ccf426d6ff9276ea1494389ebd1725090da579a57bbf4e6306d459/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79a75916ccf426d6ff9276ea1494389ebd1725090da579a57bbf4e6306d459/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79a75916ccf426d6ff9276ea1494389ebd1725090da579a57bbf4e6306d459/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:32 np0005549474 podman[263538]: 2025-12-07 10:07:32.888650148 +0000 UTC m=+0.183613572 container init 9acc9d1a0e1536961df876970274693054da59262d1d2405cbcc2406e962b457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 05:07:32 np0005549474 podman[263538]: 2025-12-07 10:07:32.900486901 +0000 UTC m=+0.195450285 container start 9acc9d1a0e1536961df876970274693054da59262d1d2405cbcc2406e962b457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:07:32 np0005549474 podman[263538]: 2025-12-07 10:07:32.904697006 +0000 UTC m=+0.199660370 container attach 9acc9d1a0e1536961df876970274693054da59262d1d2405cbcc2406e962b457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:07:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v760: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 85 op/s
Dec  7 05:07:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:33.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]: {
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:    "0": [
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:        {
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            "devices": [
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "/dev/loop3"
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            ],
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            "lv_name": "ceph_lv0",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            "lv_size": "21470642176",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            "name": "ceph_lv0",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            "tags": {
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.cluster_name": "ceph",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.crush_device_class": "",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.encrypted": "0",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.osd_id": "0",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.type": "block",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.vdo": "0",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:                "ceph.with_tpm": "0"
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            },
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            "type": "block",
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:            "vg_name": "ceph_vg0"
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:        }
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]:    ]
Dec  7 05:07:33 np0005549474 mystifying_kare[263553]: }
Dec  7 05:07:33 np0005549474 systemd[1]: libpod-9acc9d1a0e1536961df876970274693054da59262d1d2405cbcc2406e962b457.scope: Deactivated successfully.
Dec  7 05:07:33 np0005549474 podman[263538]: 2025-12-07 10:07:33.213948072 +0000 UTC m=+0.508911436 container died 9acc9d1a0e1536961df876970274693054da59262d1d2405cbcc2406e962b457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 05:07:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6e79a75916ccf426d6ff9276ea1494389ebd1725090da579a57bbf4e6306d459-merged.mount: Deactivated successfully.
Dec  7 05:07:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:33 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:33 np0005549474 podman[263538]: 2025-12-07 10:07:33.274346463 +0000 UTC m=+0.569309847 container remove 9acc9d1a0e1536961df876970274693054da59262d1d2405cbcc2406e962b457 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 05:07:33 np0005549474 systemd[1]: libpod-conmon-9acc9d1a0e1536961df876970274693054da59262d1d2405cbcc2406e962b457.scope: Deactivated successfully.
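The JSON block that mystifying_kare printed above is ceph-volume lvm-list-style output describing the logical volume behind osd.0, with the interesting state carried in the LV tags. A small sketch pulling out the fields cephadm keys on, assuming the container's stdout was captured to a file (the filename is hypothetical):

    # Parse the ceph-volume JSON logged above and summarize each OSD's LV.
    import json

    with open("ceph-volume-lvm-list.json") as f:   # hypothetical capture
        report = json.load(f)

    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c encrypted=0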
Dec  7 05:07:33 np0005549474 podman[263670]: 2025-12-07 10:07:33.960389761 +0000 UTC m=+0.053099053 container create eb626b51181b83b9d03e83674fcaf5dad40bb01e258555097649bcbbbf01ee07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  7 05:07:34 np0005549474 systemd[1]: Started libpod-conmon-eb626b51181b83b9d03e83674fcaf5dad40bb01e258555097649bcbbbf01ee07.scope.
Dec  7 05:07:34 np0005549474 podman[263670]: 2025-12-07 10:07:33.933975559 +0000 UTC m=+0.026684901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:07:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:07:34 np0005549474 podman[263670]: 2025-12-07 10:07:34.073049121 +0000 UTC m=+0.165758413 container init eb626b51181b83b9d03e83674fcaf5dad40bb01e258555097649bcbbbf01ee07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_leakey, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:07:34 np0005549474 podman[263670]: 2025-12-07 10:07:34.084628447 +0000 UTC m=+0.177337729 container start eb626b51181b83b9d03e83674fcaf5dad40bb01e258555097649bcbbbf01ee07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_leakey, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 05:07:34 np0005549474 podman[263670]: 2025-12-07 10:07:34.088651518 +0000 UTC m=+0.181360860 container attach eb626b51181b83b9d03e83674fcaf5dad40bb01e258555097649bcbbbf01ee07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_leakey, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  7 05:07:34 np0005549474 musing_leakey[263688]: 167 167
Dec  7 05:07:34 np0005549474 systemd[1]: libpod-eb626b51181b83b9d03e83674fcaf5dad40bb01e258555097649bcbbbf01ee07.scope: Deactivated successfully.
Dec  7 05:07:34 np0005549474 podman[263670]: 2025-12-07 10:07:34.091589818 +0000 UTC m=+0.184299130 container died eb626b51181b83b9d03e83674fcaf5dad40bb01e258555097649bcbbbf01ee07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 05:07:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a4ddb14984f6c88ef590a85d750730ac78f3dd6aaf89059d6865e6fbd40a937a-merged.mount: Deactivated successfully.
Dec  7 05:07:34 np0005549474 podman[263670]: 2025-12-07 10:07:34.14214767 +0000 UTC m=+0.234856922 container remove eb626b51181b83b9d03e83674fcaf5dad40bb01e258555097649bcbbbf01ee07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:07:34 np0005549474 systemd[1]: libpod-conmon-eb626b51181b83b9d03e83674fcaf5dad40bb01e258555097649bcbbbf01ee07.scope: Deactivated successfully.
Dec  7 05:07:34 np0005549474 podman[263714]: 2025-12-07 10:07:34.399534778 +0000 UTC m=+0.073799319 container create 45fb6200d2cd6b3922f1fa7be68b5bbc2fcadccef96d1d4ec3aa602ff63b984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kowalevski, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:07:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:34 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:34 np0005549474 systemd[1]: Started libpod-conmon-45fb6200d2cd6b3922f1fa7be68b5bbc2fcadccef96d1d4ec3aa602ff63b984d.scope.
Dec  7 05:07:34 np0005549474 podman[263714]: 2025-12-07 10:07:34.374774261 +0000 UTC m=+0.049038802 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:07:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:07:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff1d6df008126a9365dac2bbd95feacf1f6a41e9a3a3670b48e02e366b1b255/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff1d6df008126a9365dac2bbd95feacf1f6a41e9a3a3670b48e02e366b1b255/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff1d6df008126a9365dac2bbd95feacf1f6a41e9a3a3670b48e02e366b1b255/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff1d6df008126a9365dac2bbd95feacf1f6a41e9a3a3670b48e02e366b1b255/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:07:34 np0005549474 podman[263714]: 2025-12-07 10:07:34.485825617 +0000 UTC m=+0.160090158 container init 45fb6200d2cd6b3922f1fa7be68b5bbc2fcadccef96d1d4ec3aa602ff63b984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kowalevski, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 05:07:34 np0005549474 podman[263714]: 2025-12-07 10:07:34.499691587 +0000 UTC m=+0.173956108 container start 45fb6200d2cd6b3922f1fa7be68b5bbc2fcadccef96d1d4ec3aa602ff63b984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:07:34 np0005549474 podman[263714]: 2025-12-07 10:07:34.503787808 +0000 UTC m=+0.178052319 container attach 45fb6200d2cd6b3922f1fa7be68b5bbc2fcadccef96d1d4ec3aa602ff63b984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kowalevski, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 05:07:34 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  7 05:07:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:34.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:34 np0005549474 nova_compute[256753]: 2025-12-07 10:07:34.615 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:07:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:34 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v761: 337 pgs: 337 active+clean; 95 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 500 KiB/s wr, 79 op/s
Dec  7 05:07:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:35.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:35 np0005549474 ovn_controller[154296]: 2025-12-07T10:07:35Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:6a:e2 10.100.0.11
Dec  7 05:07:35 np0005549474 ovn_controller[154296]: 2025-12-07T10:07:35Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:6a:e2 10.100.0.11
Dec  7 05:07:35 np0005549474 lvm[263807]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:07:35 np0005549474 lvm[263807]: VG ceph_vg0 finished
Dec  7 05:07:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:35 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:35 np0005549474 relaxed_kowalevski[263731]: {}
Dec  7 05:07:35 np0005549474 systemd[1]: libpod-45fb6200d2cd6b3922f1fa7be68b5bbc2fcadccef96d1d4ec3aa602ff63b984d.scope: Deactivated successfully.
Dec  7 05:07:35 np0005549474 podman[263714]: 2025-12-07 10:07:35.326596496 +0000 UTC m=+1.000861007 container died 45fb6200d2cd6b3922f1fa7be68b5bbc2fcadccef96d1d4ec3aa602ff63b984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 05:07:35 np0005549474 systemd[1]: libpod-45fb6200d2cd6b3922f1fa7be68b5bbc2fcadccef96d1d4ec3aa602ff63b984d.scope: Consumed 1.262s CPU time.
Dec  7 05:07:35 np0005549474 systemd[1]: var-lib-containers-storage-overlay-dff1d6df008126a9365dac2bbd95feacf1f6a41e9a3a3670b48e02e366b1b255-merged.mount: Deactivated successfully.
Dec  7 05:07:35 np0005549474 podman[263714]: 2025-12-07 10:07:35.36843524 +0000 UTC m=+1.042699751 container remove 45fb6200d2cd6b3922f1fa7be68b5bbc2fcadccef96d1d4ec3aa602ff63b984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:07:35 np0005549474 systemd[1]: libpod-conmon-45fb6200d2cd6b3922f1fa7be68b5bbc2fcadccef96d1d4ec3aa602ff63b984d.scope: Deactivated successfully.
Dec  7 05:07:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:07:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:07:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:36 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:36 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:36 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:07:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:36.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:36 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v762: 337 pgs: 337 active+clean; 95 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1004 KiB/s rd, 487 KiB/s wr, 38 op/s
Dec  7 05:07:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:37.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:07:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:37.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:07:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:37.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:37 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:38 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:38.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:38.618 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:07:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:38.620 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:07:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:38.621 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:07:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:38 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v763: 337 pgs: 337 active+clean; 120 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec  7 05:07:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:07:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:39.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:07:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:39 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:39 np0005549474 nova_compute[256753]: 2025-12-07 10:07:39.617 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:07:39 np0005549474 nova_compute[256753]: 2025-12-07 10:07:39.618 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:07:39 np0005549474 nova_compute[256753]: 2025-12-07 10:07:39.619 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:07:39 np0005549474 nova_compute[256753]: 2025-12-07 10:07:39.619 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:07:39 np0005549474 nova_compute[256753]: 2025-12-07 10:07:39.652 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:07:39 np0005549474 nova_compute[256753]: 2025-12-07 10:07:39.652 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:07:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:39] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Dec  7 05:07:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:39] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Dec  7 05:07:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:40 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:40.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:40 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:40 np0005549474 nova_compute[256753]: 2025-12-07 10:07:40.861 256757 INFO nova.compute.manager [None req-62cfc08a-ca0f-4a83-b0e7-db9fd59d0dd8 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Get console output
Dec  7 05:07:40 np0005549474 nova_compute[256753]: 2025-12-07 10:07:40.866 256757 INFO oslo.privsep.daemon [None req-62cfc08a-ca0f-4a83-b0e7-db9fd59d0dd8 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp5rc7j7y8/privsep.sock']
Dec  7 05:07:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v764: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  7 05:07:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:41.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:41 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:41 np0005549474 nova_compute[256753]: 2025-12-07 10:07:41.625 256757 INFO oslo.privsep.daemon [None req-62cfc08a-ca0f-4a83-b0e7-db9fd59d0dd8 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Spawned new privsep daemon via rootwrap
Dec  7 05:07:41 np0005549474 nova_compute[256753]: 2025-12-07 10:07:41.483 263860 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  7 05:07:41 np0005549474 nova_compute[256753]: 2025-12-07 10:07:41.489 263860 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  7 05:07:41 np0005549474 nova_compute[256753]: 2025-12-07 10:07:41.493 263860 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  7 05:07:41 np0005549474 nova_compute[256753]: 2025-12-07 10:07:41.494 263860 INFO oslo.privsep.daemon [-] privsep daemon running as pid 263860
Dec  7 05:07:41 np0005549474 nova_compute[256753]: 2025-12-07 10:07:41.715 263860 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:07:42
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'images', 'backups', 'vms', '.nfs', 'default.rgw.control', '.mgr', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta']
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:07:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:07:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:07:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:42 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:07:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:42.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:07:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:42 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:07:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:07:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v765: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  7 05:07:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:43.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:43 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:44 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:44.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:44 np0005549474 nova_compute[256753]: 2025-12-07 10:07:44.652 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:07:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:44 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:44 np0005549474 nova_compute[256753]: 2025-12-07 10:07:44.654 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:07:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v766: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  7 05:07:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:45.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:45 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7f0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:46 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:46 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v767: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Dec  7 05:07:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:47.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:07:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:47.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:07:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:47.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:07:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:47.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:47 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:48 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:48 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:07:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v768: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Dec  7 05:07:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:49.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[261507]: 07/12/2025 10:07:49 : epoch 693551b2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e4003c10 fd 38 proxy ignored for local
Dec  7 05:07:49 np0005549474 kernel: ganesha.nfsd[263893]: segfault at 50 ip 00007fb8b8d5432e sp 00007fb86e7fb210 error 4 in libntirpc.so.5.8[7fb8b8d39000+2c000] likely on CPU 0 (core 0, socket 0)
Dec  7 05:07:49 np0005549474 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  7 05:07:49 np0005549474 podman[263895]: 2025-12-07 10:07:49.312161166 +0000 UTC m=+0.114879312 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  7 05:07:49 np0005549474 podman[263896]: 2025-12-07 10:07:49.312716921 +0000 UTC m=+0.114441510 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  7 05:07:49 np0005549474 systemd[1]: Started Process Core Dump (PID 263931/UID 0).
Dec  7 05:07:49 np0005549474 nova_compute[256753]: 2025-12-07 10:07:49.656 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:07:49 np0005549474 nova_compute[256753]: 2025-12-07 10:07:49.658 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:07:49 np0005549474 nova_compute[256753]: 2025-12-07 10:07:49.658 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:07:49 np0005549474 nova_compute[256753]: 2025-12-07 10:07:49.659 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:07:49 np0005549474 nova_compute[256753]: 2025-12-07 10:07:49.681 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:07:49 np0005549474 nova_compute[256753]: 2025-12-07 10:07:49.681 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:07:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:49] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Dec  7 05:07:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:49] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Dec  7 05:07:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:50.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v769: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 71 KiB/s wr, 7 op/s
Dec  7 05:07:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:51.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:52 np0005549474 systemd-coredump[263940]: Process 261511 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 58:#012#0  0x00007fb8b8d5432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  7 05:07:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:52.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:52 np0005549474 systemd[1]: systemd-coredump@9-263931-0.service: Deactivated successfully.
Dec  7 05:07:52 np0005549474 systemd[1]: systemd-coredump@9-263931-0.service: Consumed 1.031s CPU time.
Dec  7 05:07:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:52 np0005549474 podman[263951]: 2025-12-07 10:07:52.661876124 +0000 UTC m=+0.033950339 container died 501f3ba0072969d1653569a243969f9e556a26b8997c0aef685da2bfbfdac70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 05:07:52 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e1a8a7d6909f7cde947393cf67ae8ac732dfb99278fcf4edcfcfcebb1fc1e969-merged.mount: Deactivated successfully.
Dec  7 05:07:52 np0005549474 podman[263951]: 2025-12-07 10:07:52.70818533 +0000 UTC m=+0.080259505 container remove 501f3ba0072969d1653569a243969f9e556a26b8997c0aef685da2bfbfdac70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Dec  7 05:07:52 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Main process exited, code=exited, status=139/n/a
Dec  7 05:07:52 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Failed with result 'exit-code'.
Dec  7 05:07:52 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.509s CPU time.
Dec  7 05:07:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v770: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec  7 05:07:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:53.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:54.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:54 np0005549474 nova_compute[256753]: 2025-12-07 10:07:54.682 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:07:54 np0005549474 nova_compute[256753]: 2025-12-07 10:07:54.686 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:07:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:54.807 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  7 05:07:54 np0005549474 nova_compute[256753]: 2025-12-07 10:07:54.808 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:07:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:07:54.809 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  7 05:07:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v771: 337 pgs: 337 active+clean; 155 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 31 op/s
Dec  7 05:07:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:55.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:56 np0005549474 podman[263999]: 2025-12-07 10:07:56.289709716 +0000 UTC m=+0.093592261 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  7 05:07:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:56.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v772: 337 pgs: 337 active+clean; 155 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 30 op/s
Dec  7 05:07:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:07:57.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:07:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:07:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:57.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:07:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100757 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.655175) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077655233, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 894, "num_deletes": 257, "total_data_size": 1451322, "memory_usage": 1468576, "flush_reason": "Manual Compaction"}
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077665383, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1419105, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23079, "largest_seqno": 23972, "table_properties": {"data_size": 1414600, "index_size": 2093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9685, "raw_average_key_size": 18, "raw_value_size": 1405427, "raw_average_value_size": 2734, "num_data_blocks": 93, "num_entries": 514, "num_filter_entries": 514, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102008, "oldest_key_time": 1765102008, "file_creation_time": 1765102077, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 10300 microseconds, and 4657 cpu microseconds.
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.665470) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1419105 bytes OK
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.665492) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.667839) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.667857) EVENT_LOG_v1 {"time_micros": 1765102077667851, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.667875) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1446979, prev total WAL file size 1446979, number of live WAL files 2.
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.668524) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(1385KB)], [50(12MB)]
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077668677, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 14033013, "oldest_snapshot_seqno": -1}
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5508 keys, 13864484 bytes, temperature: kUnknown
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077863442, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13864484, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13827152, "index_size": 22438, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 140734, "raw_average_key_size": 25, "raw_value_size": 13727099, "raw_average_value_size": 2492, "num_data_blocks": 915, "num_entries": 5508, "num_filter_entries": 5508, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765102077, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.863705) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13864484 bytes
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.865326) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.0 rd, 71.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.0 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(19.7) write-amplify(9.8) OK, records in: 6040, records dropped: 532 output_compression: NoCompression
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.865345) EVENT_LOG_v1 {"time_micros": 1765102077865336, "job": 26, "event": "compaction_finished", "compaction_time_micros": 194826, "compaction_time_cpu_micros": 39038, "output_level": 6, "num_output_files": 1, "total_output_size": 13864484, "num_input_records": 6040, "num_output_records": 5508, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077865739, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077867630, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.668455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.867687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.867692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.867693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.867695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:07:57 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:07:57.867696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
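The flush/compaction cycle above is fully machine-readable: every rocksdb EVENT_LOG_v1 line embeds a JSON payload after the marker. A sketch (field names taken from the events above) that pairs compaction start/finish events by job id and reports bytes in versus bytes out:

    import json

    def compaction_stats(lines):
        # Pair rocksdb compaction_started/compaction_finished events by job
        # id; field names are taken from the EVENT_LOG_v1 payloads above.
        started = {}
        for line in lines:
            _, marker, payload = line.partition("EVENT_LOG_v1 ")
            if not marker:
                continue
            ev = json.loads(payload)
            if ev.get("event") == "compaction_started":
                started[ev["job"]] = ev["input_data_size"]
            elif ev.get("event") == "compaction_finished":
                if ev["job"] in started:
                    yield ev["job"], started[ev["job"]], ev["total_output_size"]

For job 26 this yields (26, 14033013, 13864484); the mon's own summary line reports the same work as write-amplify(9.8), i.e. roughly 13.2 MB written for the 1.4 MB of L0 input.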
Dec  7 05:07:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:07:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:58.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:07:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v773: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec  7 05:07:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:07:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:07:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:59.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:07:59 np0005549474 nova_compute[256753]: 2025-12-07 10:07:59.685 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:07:59 np0005549474 nova_compute[256753]: 2025-12-07 10:07:59.687 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:07:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:59] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Dec  7 05:07:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:07:59] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Dec  7 05:08:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:00.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v774: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec  7 05:08:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:01.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:02.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:08:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v775: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec  7 05:08:03 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Scheduled restart job, restart counter is at 10.
Dec  7 05:08:03 np0005549474 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 05:08:03 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 1.509s CPU time.
Dec  7 05:08:03 np0005549474 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec  7 05:08:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:03.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:03 np0005549474 podman[264102]: 2025-12-07 10:08:03.491457545 +0000 UTC m=+0.060001622 container create 86b15150039c9d7eeb4706ed22070546c97ca48fbdf6de36b8fff2ce0af601ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:08:03 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2af02a37708124f2e96d463c298575434318746b8e8668762d9879d7717b61/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:03 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2af02a37708124f2e96d463c298575434318746b8e8668762d9879d7717b61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:03 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2af02a37708124f2e96d463c298575434318746b8e8668762d9879d7717b61/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:03 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2af02a37708124f2e96d463c298575434318746b8e8668762d9879d7717b61/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bjrqrk-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:03 np0005549474 podman[264102]: 2025-12-07 10:08:03.464319153 +0000 UTC m=+0.032863280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:08:03 np0005549474 podman[264102]: 2025-12-07 10:08:03.573750215 +0000 UTC m=+0.142294332 container init 86b15150039c9d7eeb4706ed22070546c97ca48fbdf6de36b8fff2ce0af601ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 05:08:03 np0005549474 podman[264102]: 2025-12-07 10:08:03.58016805 +0000 UTC m=+0.148712127 container start 86b15150039c9d7eeb4706ed22070546c97ca48fbdf6de36b8fff2ce0af601ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:08:03 np0005549474 bash[264102]: 86b15150039c9d7eeb4706ed22070546c97ca48fbdf6de36b8fff2ce0af601ac
Dec  7 05:08:03 np0005549474 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 05:08:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 05:08:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 05:08:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 05:08:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 05:08:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 05:08:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 05:08:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 05:08:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
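systemd's "restart counter is at 10" figure for this ganesha unit can be read back directly; NRestarts is a standard systemd service property. A minimal sketch:

    import subprocess

    # Read back systemd's restart counter for a unit; NRestarts is a
    # standard service property on modern systemd.
    def restart_count(unit):
        out = subprocess.run(
            ["systemctl", "show", unit, "--property=NRestarts"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()          # e.g. "NRestarts=10"
        return int(out.split("=", 1)[1])

At this point in the log, restart_count("ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service") would report 10, matching the scheduled-restart message above.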
Dec  7 05:08:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:04.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:04 np0005549474 nova_compute[256753]: 2025-12-07 10:08:04.689 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:04 np0005549474 nova_compute[256753]: 2025-12-07 10:08:04.691 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:04 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:04.812 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
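The DbSetCommand transaction above stamps the agent's Chassis_Private record with the southbound config sequence number it has caught up to ('neutron:ovn-metadata-sb-cfg': '4'), which is how the metadata agent reports liveness. A rough CLI equivalent, for illustration only:

    import subprocess

    # Rough CLI equivalent of the DbSetCommand above. The record UUID and
    # value come from the log line; quoting of the colon-containing key is
    # assumed, and the agent itself talks to the southbound DB through
    # ovsdbapp rather than ovn-sbctl.
    subprocess.run(
        ["ovn-sbctl", "set", "Chassis_Private",
         "8da81261-a5d6-4df8-aa54-d9c0c8f72a67",
         'external_ids:"neutron:ovn-metadata-sb-cfg"="4"'],
        check=True,
    )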
Dec  7 05:08:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v776: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec  7 05:08:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:05.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:06.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v777: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 491 KiB/s wr, 78 op/s
Dec  7 05:08:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:08:07.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:08:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:08:07.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
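Both webhook receivers fail with dial timeouts rather than HTTP errors, which points at reachability of the peers rather than the dashboard API itself. A quick probe of the two endpoints named in the error:

    import socket

    # Reachability probe for the two webhook receivers the dispatcher keeps
    # timing out on; hosts and port are taken from the error above.
    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            socket.create_connection((host, 8443), timeout=2).close()
            print(host, "reachable")
        except OSError as exc:
            print(host, "unreachable:", exc)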
Dec  7 05:08:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:07.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:08:07 np0005549474 nova_compute[256753]: 2025-12-07 10:08:07.793 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:08.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v778: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 493 KiB/s wr, 78 op/s
Dec  7 05:08:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:09.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:09 np0005549474 nova_compute[256753]: 2025-12-07 10:08:09.691 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:09 np0005549474 nova_compute[256753]: 2025-12-07 10:08:09.693 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:09 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:08:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:09 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:08:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:09] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Dec  7 05:08:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:09] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Dec  7 05:08:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:10.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v779: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Dec  7 05:08:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:11.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:08:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
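The audit lines show the mgr dispatching "osd blocklist ls" every few seconds. The same query can be run from a shell and parsed as JSON (the output shape, a list of entries, is assumed):

    import json
    import subprocess

    # The same "osd blocklist ls" query the mgr dispatches above, run from
    # the CLI; the JSON output shape (a list of entries) is assumed.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    entries = json.loads(out) if out.strip() else []
    print(len(entries), "blocklist entries")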
Dec  7 05:08:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:08:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:08:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:08:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:08:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:08:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:08:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:12.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:08:12 np0005549474 nova_compute[256753]: 2025-12-07 10:08:12.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:08:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v780: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Dec  7 05:08:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:13.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:13 np0005549474 nova_compute[256753]: 2025-12-07 10:08:13.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:08:13 np0005549474 nova_compute[256753]: 2025-12-07 10:08:13.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:08:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:14.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.695 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.697 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.697 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.697 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.738 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.739 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
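The six vlog lines above trace the python-ovs reconnect state machine: after roughly 5 s idle the client sends an inactivity probe and drops to IDLE, and the server's reply flips it back to ACTIVE. The probe round-trip can be measured from the embedded oslo timestamps; a sketch (timestamp format assumed from these lines):

    import re
    from datetime import datetime

    # Measure the OVSDB inactivity-probe round-trip from the oslo timestamps
    # embedded in the journal lines (format assumed from the lines above).
    TS = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+")

    def oslo_ts(line):
        return datetime.strptime(TS.search(line).group(0),
                                 "%Y-%m-%d %H:%M:%S.%f")

    def probe_rtt_seconds(probe_line, active_line):
        return (oslo_ts(active_line) - oslo_ts(probe_line)).total_seconds()

For the pair above, 10:08:14.697 ("sending inactivity probe") to 10:08:14.739 ("entering ACTIVE") is about 0.042 s.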
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.783 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.784 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.784 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.784 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:08:14 np0005549474 nova_compute[256753]: 2025-12-07 10:08:14.785 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:08:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v781: 337 pgs: 337 active+clean; 168 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 147 KiB/s wr, 87 op/s
Dec  7 05:08:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:15.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:08:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1758444221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:08:15 np0005549474 nova_compute[256753]: 2025-12-07 10:08:15.308 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
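The resource tracker's disk accounting shells out to the exact command logged above. Reduced to a standalone sketch:

    import json
    import subprocess

    # The same command the resource tracker runs above; "stats" holds the
    # cluster-wide totals in ceph df's JSON output.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("avail:", stats["total_avail_bytes"], "of", stats["total_bytes"], "bytes")

On an RBD-backed hypervisor these cluster totals are what feed the free_disk figure reported a few lines further down.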
Dec  7 05:08:15 np0005549474 nova_compute[256753]: 2025-12-07 10:08:15.395 256757 DEBUG nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  7 05:08:15 np0005549474 nova_compute[256753]: 2025-12-07 10:08:15.395 256757 DEBUG nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  7 05:08:15 np0005549474 nova_compute[256753]: 2025-12-07 10:08:15.595 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:08:15 np0005549474 nova_compute[256753]: 2025-12-07 10:08:15.597 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4467MB free_disk=59.92041015625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:08:15 np0005549474 nova_compute[256753]: 2025-12-07 10:08:15.597 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:08:15 np0005549474 nova_compute[256753]: 2025-12-07 10:08:15.597 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:08:15 np0005549474 nova_compute[256753]: 2025-12-07 10:08:15.682 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Instance 85f56bb8-2b0e-4405-a313-156300c853e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  7 05:08:15 np0005549474 nova_compute[256753]: 2025-12-07 10:08:15.682 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:08:15 np0005549474 nova_compute[256753]: 2025-12-07 10:08:15.683 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:08:15 np0005549474 nova_compute[256753]: 2025-12-07 10:08:15.716 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  7 05:08:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
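The string of "DBUS :CRIT" messages above, ending with the dbus service thread exiting, all stem from one missing path: /run/dbus/system_bus_socket is not present inside the ganesha container. A pre-flight check turns that into a single clear failure:

    import os

    # ganesha's DBus thread above exits because /run/dbus/system_bus_socket
    # does not exist inside the container; check before starting instead of
    # letting each DBus consumer fail separately.
    SOCKET = "/run/dbus/system_bus_socket"
    if not os.path.exists(SOCKET):
        raise SystemExit(SOCKET + " missing; bind-mount the host socket or "
                         "run ganesha with its DBus features disabled")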
Dec  7 05:08:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:08:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826204743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:08:16 np0005549474 nova_compute[256753]: 2025-12-07 10:08:16.209 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:08:16 np0005549474 nova_compute[256753]: 2025-12-07 10:08:16.216 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:08:16 np0005549474 nova_compute[256753]: 2025-12-07 10:08:16.243 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:08:16 np0005549474 nova_compute[256753]: 2025-12-07 10:08:16.268 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:08:16 np0005549474 nova_compute[256753]: 2025-12-07 10:08:16.269 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
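The "waited 0.000s / held 0.671s" figures in the lockutils lines above come from timing the lock acquisition and the critical section separately. A standalone equivalent of that instrumentation:

    import time
    from contextlib import contextmanager
    from threading import Lock

    # Time both the wait for a lock and how long the critical section holds
    # it, mirroring the "waited ... / held ..." output in the log above.
    @contextmanager
    def timed_lock(lock, name):
        t0 = time.monotonic()
        with lock:
            t1 = time.monotonic()
            print('Lock "%s" acquired :: waited %.3fs' % (name, t1 - t0))
            try:
                yield
            finally:
                print('Lock "%s" released :: held %.3fs'
                      % (name, time.monotonic() - t1))

Here the 0.671 s hold covers the second ceph df call plus the placement inventory check, while the wait was effectively zero because no other periodic task held "compute_resources".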
Dec  7 05:08:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:16 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1598000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:16.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:16 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v782: 337 pgs: 337 active+clean; 168 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 134 KiB/s wr, 13 op/s
Dec  7 05:08:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:08:17.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:08:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:17.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:17 np0005549474 nova_compute[256753]: 2025-12-07 10:08:17.270 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:08:17 np0005549474 nova_compute[256753]: 2025-12-07 10:08:17.271 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:08:17 np0005549474 nova_compute[256753]: 2025-12-07 10:08:17.271 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  7 05:08:17 np0005549474 nova_compute[256753]: 2025-12-07 10:08:17.272 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  7 05:08:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:17 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
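The _set_new_cache_sizes line above splits one mon cache budget three ways; the logged allocations do fit inside the budget:

    cache_size = 1020054731  # bytes, from the log line above
    parts = {"inc_alloc": 343932928, "full_alloc": 348127232, "kv_alloc": 318767104}
    for name, nbytes in parts.items():
        print(f"{name}: {nbytes / 2**20:.0f} MiB")   # 328 / 332 / 304 MiB
    print("sum fits budget:", sum(parts.values()) <= cache_size)  # True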
Dec  7 05:08:18 np0005549474 nova_compute[256753]: 2025-12-07 10:08:18.227 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:08:18 np0005549474 nova_compute[256753]: 2025-12-07 10:08:18.228 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquired lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:08:18 np0005549474 nova_compute[256753]: 2025-12-07 10:08:18.228 256757 DEBUG nova.network.neutron [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  7 05:08:18 np0005549474 nova_compute[256753]: 2025-12-07 10:08:18.229 256757 DEBUG nova.objects.instance [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lazy-loading 'info_cache' on Instance uuid 85f56bb8-2b0e-4405-a313-156300c853e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:08:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:18 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1598000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 05:08:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:18.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
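These beast access lines repeat every second or two and look like load-balancer health probes issuing "HEAD /". If you need them structured, a regex sketch over the exact format above:

    import re

    line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:10:08:18.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.002000054s')
    m = re.search(r'beast: \S+: (\S+) .*"(\S+) (\S+) [^"]+" (\d+) \d+ .* '
                  r'latency=([\d.]+)s', line)
    if m:
        client, verb, path, status, latency = m.groups()
        print(client, verb, path, status, float(latency))
    # -> 192.168.122.100 HEAD / 200 0.002000054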
Dec  7 05:08:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:18 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v783: 337 pgs: 337 active+clean; 199 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec  7 05:08:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 05:08:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:19.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 05:08:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100819 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:08:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:19 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:19 np0005549474 nova_compute[256753]: 2025-12-07 10:08:19.740 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:08:19 np0005549474 nova_compute[256753]: 2025-12-07 10:08:19.742 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:08:19 np0005549474 nova_compute[256753]: 2025-12-07 10:08:19.742 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  7 05:08:19 np0005549474 nova_compute[256753]: 2025-12-07 10:08:19.742 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:08:19 np0005549474 nova_compute[256753]: 2025-12-07 10:08:19.743 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:19 np0005549474 nova_compute[256753]: 2025-12-07 10:08:19.744 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
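The ovsdbapp lines above are the OVS reconnect state machine: roughly 5 s of idleness, send an inactivity probe and drop to IDLE, then return to ACTIVE when the reply wakes the poller. A toy model of that cycle (not the real ovs.reconnect class; times in ms):

    import time

    class ProbeFSM:
        PROBE_INTERVAL_MS = 5000  # matches the ~5000 ms idle threshold logged above

        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic() * 1000

        def tick(self, now_ms, got_data=False):
            if got_data:                # any traffic proves the peer is alive
                self.last_activity = now_ms
                self.state = "ACTIVE"   # mirrors "entering ACTIVE"
            elif (self.state == "ACTIVE"
                  and now_ms - self.last_activity >= self.PROBE_INTERVAL_MS):
                self.state = "IDLE"     # mirrors "sending inactivity probe ... entering IDLE"

    fsm = ProbeFSM()
    fsm.tick(fsm.last_activity + 5001)                  # -> IDLE
    fsm.tick(fsm.last_activity + 5002, got_data=True)   # -> ACTIVE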
Dec  7 05:08:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:19] "GET /metrics HTTP/1.1" 200 48378 "" "Prometheus/2.51.0"
Dec  7 05:08:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:19] "GET /metrics HTTP/1.1" 200 48378 "" "Prometheus/2.51.0"
Dec  7 05:08:20 np0005549474 podman[264238]: 2025-12-07 10:08:20.288560902 +0000 UTC m=+0.090409404 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  7 05:08:20 np0005549474 nova_compute[256753]: 2025-12-07 10:08:20.292 256757 DEBUG nova.network.neutron [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Updating instance_info_cache with network_info: [{"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
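The instance_info_cache payload above nests addresses as network -> subnets -> ips -> floating_ips; a small walker over that shape (data abbreviated from the line above):

    network_info = [{
        "id": "231300d5-bcb5-4f0e-be76-d6422cfeb132",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.11", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.243"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
    # -> 231300d5-... 10.100.0.11 -> ['192.168.122.243']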
Dec  7 05:08:20 np0005549474 nova_compute[256753]: 2025-12-07 10:08:20.308 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Releasing lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:08:20 np0005549474 nova_compute[256753]: 2025-12-07 10:08:20.308 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  7 05:08:20 np0005549474 nova_compute[256753]: 2025-12-07 10:08:20.309 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:08:20 np0005549474 nova_compute[256753]: 2025-12-07 10:08:20.309 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:08:20 np0005549474 nova_compute[256753]: 2025-12-07 10:08:20.309 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:08:20 np0005549474 nova_compute[256753]: 2025-12-07 10:08:20.310 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  7 05:08:20 np0005549474 podman[264239]: 2025-12-07 10:08:20.330265442 +0000 UTC m=+0.129586915 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
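Both podman health_status=healthy events above come from the configured '/openstack/healthcheck' test. The current verdict can be read back with podman inspect; a sketch, noting that the Go-template field path varies across podman versions:

    import subprocess

    for name in ("multipathd", "ovn_controller"):   # container names from the events above
        status = subprocess.run(
            # Older podman exposes this as .State.Healthcheck.Status instead.
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(name, status)   # expected "healthy", matching the log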
Dec  7 05:08:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:20 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:20.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:20 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v784: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  7 05:08:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:21.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:21 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:21 np0005549474 nova_compute[256753]: 2025-12-07 10:08:21.787 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:08:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:22 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:22.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:08:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:22 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v785: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  7 05:08:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:23.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:23 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:24 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:24.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:24 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:24 np0005549474 nova_compute[256753]: 2025-12-07 10:08:24.744 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:08:24 np0005549474 nova_compute[256753]: 2025-12-07 10:08:24.745 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:24 np0005549474 nova_compute[256753]: 2025-12-07 10:08:24.745 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  7 05:08:24 np0005549474 nova_compute[256753]: 2025-12-07 10:08:24.745 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:08:24 np0005549474 nova_compute[256753]: 2025-12-07 10:08:24.746 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:08:24 np0005549474 nova_compute[256753]: 2025-12-07 10:08:24.747 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v786: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  7 05:08:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:25.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:25 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:26 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:26.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:26 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v787: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Dec  7 05:08:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:08:27.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
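The dispatcher error above means both dashboard webhook receivers timed out twice. The client-side failure mode is reproducible with a short-deadline POST; a sketch using the third-party requests library, with the URL from the log and a minimal Alertmanager-style payload as an assumption:

    import requests

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    payload = {"alerts": [{"status": "firing", "labels": {"alertname": "example"}}]}
    try:
        requests.post(url, json=payload, timeout=2)  # short deadline, like the dispatcher's
    except requests.exceptions.RequestException as exc:
        print("notify failed:", exc)  # analogous to "context deadline exceeded"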
Dec  7 05:08:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:27.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:27 np0005549474 podman[264315]: 2025-12-07 10:08:27.25399877 +0000 UTC m=+0.067840146 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  7 05:08:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:27 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:08:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:08:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:08:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:28 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:28.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:28 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v788: 337 pgs: 337 active+clean; 132 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.0 MiB/s wr, 72 op/s
Dec  7 05:08:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:29 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:29 np0005549474 ovn_controller[154296]: 2025-12-07T10:08:29Z|00034|binding|INFO|Releasing lport c29113f5-93e1-45cf-a1b5-872e1cb341ba from this chassis (sb_readonly=0)
Dec  7 05:08:29 np0005549474 nova_compute[256753]: 2025-12-07 10:08:29.631 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:29 np0005549474 nova_compute[256753]: 2025-12-07 10:08:29.746 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:29] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Dec  7 05:08:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:29] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Dec  7 05:08:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:30 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:30.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:30 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.774 256757 DEBUG nova.compute.manager [req-0793349e-c4f6-4c2a-a445-25f24aeab41e req-f76bc568-85e4-4c4b-b129-d0802f49eeb9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Received event network-changed-231300d5-bcb5-4f0e-be76-d6422cfeb132 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.775 256757 DEBUG nova.compute.manager [req-0793349e-c4f6-4c2a-a445-25f24aeab41e req-f76bc568-85e4-4c4b-b129-d0802f49eeb9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Refreshing instance network info cache due to event network-changed-231300d5-bcb5-4f0e-be76-d6422cfeb132. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.775 256757 DEBUG oslo_concurrency.lockutils [req-0793349e-c4f6-4c2a-a445-25f24aeab41e req-f76bc568-85e4-4c4b-b129-d0802f49eeb9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.776 256757 DEBUG oslo_concurrency.lockutils [req-0793349e-c4f6-4c2a-a445-25f24aeab41e req-f76bc568-85e4-4c4b-b129-d0802f49eeb9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.776 256757 DEBUG nova.network.neutron [req-0793349e-c4f6-4c2a-a445-25f24aeab41e req-f76bc568-85e4-4c4b-b129-d0802f49eeb9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Refreshing network info cache for port 231300d5-bcb5-4f0e-be76-d6422cfeb132 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.915 256757 DEBUG oslo_concurrency.lockutils [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "85f56bb8-2b0e-4405-a313-156300c853e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.915 256757 DEBUG oslo_concurrency.lockutils [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.915 256757 DEBUG oslo_concurrency.lockutils [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.916 256757 DEBUG oslo_concurrency.lockutils [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.916 256757 DEBUG oslo_concurrency.lockutils [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.917 256757 INFO nova.compute.manager [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Terminating instance#033[00m
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.918 256757 DEBUG nova.compute.manager [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  7 05:08:30 np0005549474 kernel: tap231300d5-bc (unregistering): left promiscuous mode
Dec  7 05:08:30 np0005549474 NetworkManager[49051]: <info>  [1765102110.9751] device (tap231300d5-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.978 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:30 np0005549474 ovn_controller[154296]: 2025-12-07T10:08:30Z|00035|binding|INFO|Releasing lport 231300d5-bcb5-4f0e-be76-d6422cfeb132 from this chassis (sb_readonly=0)
Dec  7 05:08:30 np0005549474 ovn_controller[154296]: 2025-12-07T10:08:30Z|00036|binding|INFO|Setting lport 231300d5-bcb5-4f0e-be76-d6422cfeb132 down in Southbound
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.980 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:30 np0005549474 ovn_controller[154296]: 2025-12-07T10:08:30Z|00037|binding|INFO|Removing iface tap231300d5-bc ovn-installed in OVS
Dec  7 05:08:30 np0005549474 nova_compute[256753]: 2025-12-07 10:08:30.983 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:30 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:30.988 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:6a:e2 10.100.0.11'], port_security=['fa:16:3e:28:6a:e2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '85f56bb8-2b0e-4405-a313-156300c853e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba5590d7-ace7-4d21-97d3-6f4299ad21a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71f2a529-e890-4416-bb37-8ebbeaaf7d18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3afdebe-ce17-484e-8cc0-e268e6f58f98, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=231300d5-bcb5-4f0e-be76-d6422cfeb132) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:08:30 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:30.990 164143 INFO neutron.agent.ovn.metadata.agent [-] Port 231300d5-bcb5-4f0e-be76-d6422cfeb132 in datapath ba5590d7-ace7-4d21-97d3-6f4299ad21a1 unbound from our chassis#033[00m
Dec  7 05:08:30 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:30.992 164143 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba5590d7-ace7-4d21-97d3-6f4299ad21a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  7 05:08:30 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:30.995 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[e32143fe-d0ae-4256-b40c-cf92bb5e7ce9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:08:30 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:30.996 164143 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1 namespace which is not needed anymore#033[00m
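The agent tears the ovnmeta namespace down once no VIF ports remain in the datapath. The equivalent check-then-remove, sketched with pyroute2 (which neutron drives through privsep); root privileges and the namespace name from the log are assumed:

    from pyroute2 import netns

    ns = "ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1"
    if ns in netns.listnetns():
        netns.remove(ns)  # same end state as the cleanup logged above
        print("removed", ns)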
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.008 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v789: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 63 KiB/s wr, 39 op/s
Dec  7 05:08:31 np0005549474 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  7 05:08:31 np0005549474 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.968s CPU time.
Dec  7 05:08:31 np0005549474 systemd-machined[217882]: Machine qemu-1-instance-00000001 terminated.
Dec  7 05:08:31 np0005549474 neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1[262420]: [NOTICE]   (262437) : haproxy version is 2.8.14-c23fe91
Dec  7 05:08:31 np0005549474 neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1[262420]: [NOTICE]   (262437) : path to executable is /usr/sbin/haproxy
Dec  7 05:08:31 np0005549474 neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1[262420]: [WARNING]  (262437) : Exiting Master process...
Dec  7 05:08:31 np0005549474 neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1[262420]: [ALERT]    (262437) : Current worker (262439) exited with code 143 (Terminated)
Dec  7 05:08:31 np0005549474 neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1[262420]: [WARNING]  (262437) : All workers exited. Exiting... (0)
Dec  7 05:08:31 np0005549474 systemd[1]: libpod-0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79.scope: Deactivated successfully.
Dec  7 05:08:31 np0005549474 conmon[262420]: conmon 0059b5baf13ec2076d2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79.scope/container/memory.events
Dec  7 05:08:31 np0005549474 podman[264362]: 2025-12-07 10:08:31.153937451 +0000 UTC m=+0.054090229 container died 0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.164 256757 INFO nova.virt.libvirt.driver [-] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Instance destroyed successfully.#033[00m
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.165 256757 DEBUG nova.objects.instance [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'resources' on Instance uuid 85f56bb8-2b0e-4405-a313-156300c853e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:08:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79-userdata-shm.mount: Deactivated successfully.
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.183 256757 DEBUG nova.virt.libvirt.vif [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2095719119',display_name='tempest-TestNetworkBasicOps-server-2095719119',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2095719119',id=1,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsVHMS8iocA8+Rh3fh2+y9lSS5qiLX7I8VOl9BfUUw2+sXQOsdN/jr824ramDTfkJWrUKjydtUaUwdlfo7Pw0CklT8ylELWbhX5dNUZiOWRtp5EZtMKgO29c1zzSh9SNA==',key_name='tempest-TestNetworkBasicOps-1528391493',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:07:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-hoqfzoha',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:07:22Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=85f56bb8-2b0e-4405-a313-156300c853e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.184 256757 DEBUG nova.network.os_vif_util [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.185 256757 DEBUG nova.network.os_vif_util [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:6a:e2,bridge_name='br-int',has_traffic_filtering=True,id=231300d5-bcb5-4f0e-be76-d6422cfeb132,network=Network(ba5590d7-ace7-4d21-97d3-6f4299ad21a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap231300d5-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.185 256757 DEBUG os_vif [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:6a:e2,bridge_name='br-int',has_traffic_filtering=True,id=231300d5-bcb5-4f0e-be76-d6422cfeb132,network=Network(ba5590d7-ace7-4d21-97d3-6f4299ad21a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap231300d5-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.187 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.187 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap231300d5-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.189 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay-01e31e450c0f06cc2be77e780b42a3d48c6dcbcae8840a53ca1e277b3f0124fb-merged.mount: Deactivated successfully.
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.192 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.194 256757 INFO os_vif [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:6a:e2,bridge_name='br-int',has_traffic_filtering=True,id=231300d5-bcb5-4f0e-be76-d6422cfeb132,network=Network(ba5590d7-ace7-4d21-97d3-6f4299ad21a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap231300d5-bc')#033[00m
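The unplug above boils down to the DelPortCommand transaction logged at 10:08:31.187. A standalone sketch against the same local OVSDB endpoint (tcp:127.0.0.1:6640 from the log); the ovsdbapp connection helper names are recalled from its public API and should be treated as an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))
    # Same semantics as the logged command: no error if the port is already gone.
    api.del_port("tap231300d5-bc", bridge="br-int", if_exists=True).execute(check_error=True)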
Dec  7 05:08:31 np0005549474 podman[264362]: 2025-12-07 10:08:31.196177046 +0000 UTC m=+0.096329824 container cleanup 0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 05:08:31 np0005549474 systemd[1]: libpod-conmon-0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79.scope: Deactivated successfully.
Dec  7 05:08:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:31.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:31 np0005549474 podman[264407]: 2025-12-07 10:08:31.27309542 +0000 UTC m=+0.054543313 container remove 0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 05:08:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:31 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:31.280 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[67d316dc-01e5-4642-87a0-fc2623a41021]: (4, ('Sun Dec  7 10:08:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1 (0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79)\n0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79\nSun Dec  7 10:08:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1 (0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79)\n0059b5baf13ec2076d2cc0cc160bf8d2a2ddb9322ce5e5d91e8a8ff4862cbb79\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:08:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:31.282 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[4017f5b0-976a-41af-b9b5-866650dc280e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:08:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:31.282 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba5590d7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.284 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:31 np0005549474 kernel: tapba5590d7-a0: left promiscuous mode
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.312 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:31.314 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[4a74ec41-7dd6-4dc8-964e-2542bf0df656]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:08:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:31.328 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[880c0ea2-0f54-4a89-8d69-ed14b1b314ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:08:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:31.329 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[d65e1d49-537a-4fc1-8db9-436dbd9f1c62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:08:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:31.346 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[c6126e07-5518-4fce-b95e-72447c28a156]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400757, 'reachable_time': 34782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264433, 'error': None, 'target': 'ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:08:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:31.361 164283 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ba5590d7-ace7-4d21-97d3-6f4299ad21a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  7 05:08:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:31.362 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[76e345f3-95b9-4ed6-8edb-ad48da00325d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:08:31 np0005549474 systemd[1]: run-netns-ovnmeta\x2dba5590d7\x2dace7\x2d4d21\x2d97d3\x2d6f4299ad21a1.mount: Deactivated successfully.
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.601 256757 INFO nova.virt.libvirt.driver [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Deleting instance files /var/lib/nova/instances/85f56bb8-2b0e-4405-a313-156300c853e4_del
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.601 256757 INFO nova.virt.libvirt.driver [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Deletion of /var/lib/nova/instances/85f56bb8-2b0e-4405-a313-156300c853e4_del complete
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.653 256757 DEBUG nova.virt.libvirt.host [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.653 256757 INFO nova.virt.libvirt.host [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] UEFI support detected
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.655 256757 INFO nova.compute.manager [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Took 0.74 seconds to destroy the instance on the hypervisor.
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.655 256757 DEBUG oslo.service.loopingcall [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.656 256757 DEBUG nova.compute.manager [-] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  7 05:08:31 np0005549474 nova_compute[256753]: 2025-12-07 10:08:31.656 256757 DEBUG nova.network.neutron [-] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.304 256757 DEBUG nova.network.neutron [req-0793349e-c4f6-4c2a-a445-25f24aeab41e req-f76bc568-85e4-4c4b-b129-d0802f49eeb9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Updated VIF entry in instance network info cache for port 231300d5-bcb5-4f0e-be76-d6422cfeb132. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.305 256757 DEBUG nova.network.neutron [req-0793349e-c4f6-4c2a-a445-25f24aeab41e req-f76bc568-85e4-4c4b-b129-d0802f49eeb9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Updating instance_info_cache with network_info: [{"id": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "address": "fa:16:3e:28:6a:e2", "network": {"id": "ba5590d7-ace7-4d21-97d3-6f4299ad21a1", "bridge": "br-int", "label": "tempest-network-smoke--660428823", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap231300d5-bc", "ovs_interfaceid": "231300d5-bcb5-4f0e-be76-d6422cfeb132", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.312 256757 DEBUG nova.network.neutron [-] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.331 256757 DEBUG oslo_concurrency.lockutils [req-0793349e-c4f6-4c2a-a445-25f24aeab41e req-f76bc568-85e4-4c4b-b129-d0802f49eeb9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-85f56bb8-2b0e-4405-a313-156300c853e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.334 256757 INFO nova.compute.manager [-] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Took 0.68 seconds to deallocate network for instance.
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.361 256757 DEBUG nova.compute.manager [req-392e8093-9996-4087-a4bf-c199d56cf5c1 req-9c4ea4d9-4039-427b-b790-a7725feac1b5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Received event network-vif-deleted-231300d5-bcb5-4f0e-be76-d6422cfeb132 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.394 256757 DEBUG oslo_concurrency.lockutils [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.395 256757 DEBUG oslo_concurrency.lockutils [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.443 256757 DEBUG oslo_concurrency.processutils [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:08:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:32 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 05:08:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:32.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  7 05:08:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:08:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:32 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.886 256757 DEBUG nova.compute.manager [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Received event network-vif-unplugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.887 256757 DEBUG oslo_concurrency.lockutils [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.887 256757 DEBUG oslo_concurrency.lockutils [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.887 256757 DEBUG oslo_concurrency.lockutils [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.888 256757 DEBUG nova.compute.manager [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] No waiting events found dispatching network-vif-unplugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.888 256757 WARNING nova.compute.manager [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Received unexpected event network-vif-unplugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 for instance with vm_state deleted and task_state None.
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.888 256757 DEBUG nova.compute.manager [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Received event network-vif-plugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.888 256757 DEBUG oslo_concurrency.lockutils [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.889 256757 DEBUG oslo_concurrency.lockutils [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.889 256757 DEBUG oslo_concurrency.lockutils [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.889 256757 DEBUG nova.compute.manager [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] No waiting events found dispatching network-vif-plugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.889 256757 WARNING nova.compute.manager [req-9bad097e-e7cb-4171-924e-4c5b18b4bfb3 req-14020a3a-50d6-4ec0-b409-cd21a464a650 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Received unexpected event network-vif-plugged-231300d5-bcb5-4f0e-be76-d6422cfeb132 for instance with vm_state deleted and task_state None.
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.961 256757 DEBUG oslo_concurrency.processutils [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.968 256757 DEBUG nova.compute.provider_tree [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:08:32 np0005549474 nova_compute[256753]: 2025-12-07 10:08:32.986 256757 DEBUG nova.scheduler.client.report [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:08:33 np0005549474 nova_compute[256753]: 2025-12-07 10:08:33.011 256757 DEBUG oslo_concurrency.lockutils [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:08:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v790: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Dec  7 05:08:33 np0005549474 nova_compute[256753]: 2025-12-07 10:08:33.038 256757 INFO nova.scheduler.client.report [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Deleted allocations for instance 85f56bb8-2b0e-4405-a313-156300c853e4
Dec  7 05:08:33 np0005549474 nova_compute[256753]: 2025-12-07 10:08:33.127 256757 DEBUG oslo_concurrency.lockutils [None req-c5aacfe0-0375-4950-b5bc-4d12c4c709b0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "85f56bb8-2b0e-4405-a313-156300c853e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:08:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:33.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:33 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:34 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:34.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:34 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:34 np0005549474 nova_compute[256753]: 2025-12-07 10:08:34.749 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v791: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 18 KiB/s wr, 56 op/s
Dec  7 05:08:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:35.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:35 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:36 np0005549474 nova_compute[256753]: 2025-12-07 10:08:36.189 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:36 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:36.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:36 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:08:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:08:36 np0005549474 nova_compute[256753]: 2025-12-07 10:08:36.812 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:36 np0005549474 nova_compute[256753]: 2025-12-07 10:08:36.893 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:08:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v792: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Dec  7 05:08:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:08:37.122Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:08:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:08:37.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:08:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:37.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:37 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:37 np0005549474 podman[264638]: 2025-12-07 10:08:37.304520541 +0000 UTC m=+0.055174590 container create fb961ba51d7c71136c80139ce861fd310724b956af1387bd0937a04fef0d7bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:08:37 np0005549474 systemd[1]: Started libpod-conmon-fb961ba51d7c71136c80139ce861fd310724b956af1387bd0937a04fef0d7bac.scope.
Dec  7 05:08:37 np0005549474 podman[264638]: 2025-12-07 10:08:37.286456996 +0000 UTC m=+0.037111075 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:08:37 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:08:37 np0005549474 podman[264638]: 2025-12-07 10:08:37.406035796 +0000 UTC m=+0.156689895 container init fb961ba51d7c71136c80139ce861fd310724b956af1387bd0937a04fef0d7bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:08:37 np0005549474 podman[264638]: 2025-12-07 10:08:37.414919099 +0000 UTC m=+0.165573148 container start fb961ba51d7c71136c80139ce861fd310724b956af1387bd0937a04fef0d7bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:08:37 np0005549474 podman[264638]: 2025-12-07 10:08:37.418634001 +0000 UTC m=+0.169288110 container attach fb961ba51d7c71136c80139ce861fd310724b956af1387bd0937a04fef0d7bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:08:37 np0005549474 goofy_wescoff[264655]: 167 167
Dec  7 05:08:37 np0005549474 systemd[1]: libpod-fb961ba51d7c71136c80139ce861fd310724b956af1387bd0937a04fef0d7bac.scope: Deactivated successfully.
Dec  7 05:08:37 np0005549474 podman[264638]: 2025-12-07 10:08:37.42301829 +0000 UTC m=+0.173672379 container died fb961ba51d7c71136c80139ce861fd310724b956af1387bd0937a04fef0d7bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 05:08:37 np0005549474 systemd[1]: var-lib-containers-storage-overlay-8c608e735b3d3b9e18f668b1666ef9bab67c2db88cac6faa16b53dc5a9639119-merged.mount: Deactivated successfully.
Dec  7 05:08:37 np0005549474 podman[264638]: 2025-12-07 10:08:37.471405453 +0000 UTC m=+0.222059532 container remove fb961ba51d7c71136c80139ce861fd310724b956af1387bd0937a04fef0d7bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:08:37 np0005549474 systemd[1]: libpod-conmon-fb961ba51d7c71136c80139ce861fd310724b956af1387bd0937a04fef0d7bac.scope: Deactivated successfully.
Dec  7 05:08:37 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:08:37 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:08:37 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:08:37 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:08:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:08:37 np0005549474 podman[264677]: 2025-12-07 10:08:37.68852484 +0000 UTC m=+0.054565683 container create fb71862ccd9d70f797d43006a93fb31fb852381821755ee580acd891587d8469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:08:37 np0005549474 systemd[1]: Started libpod-conmon-fb71862ccd9d70f797d43006a93fb31fb852381821755ee580acd891587d8469.scope.
Dec  7 05:08:37 np0005549474 podman[264677]: 2025-12-07 10:08:37.65927084 +0000 UTC m=+0.025311743 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:08:37 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:08:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f05f69c914adaeb6a3200ad37404d391ec9f0d245d2744408d1c4f78e74403/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f05f69c914adaeb6a3200ad37404d391ec9f0d245d2744408d1c4f78e74403/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f05f69c914adaeb6a3200ad37404d391ec9f0d245d2744408d1c4f78e74403/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f05f69c914adaeb6a3200ad37404d391ec9f0d245d2744408d1c4f78e74403/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f05f69c914adaeb6a3200ad37404d391ec9f0d245d2744408d1c4f78e74403/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:37 np0005549474 podman[264677]: 2025-12-07 10:08:37.805660423 +0000 UTC m=+0.171701246 container init fb71862ccd9d70f797d43006a93fb31fb852381821755ee580acd891587d8469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:08:37 np0005549474 podman[264677]: 2025-12-07 10:08:37.824191449 +0000 UTC m=+0.190232302 container start fb71862ccd9d70f797d43006a93fb31fb852381821755ee580acd891587d8469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_swirles, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:08:37 np0005549474 podman[264677]: 2025-12-07 10:08:37.828674422 +0000 UTC m=+0.194715265 container attach fb71862ccd9d70f797d43006a93fb31fb852381821755ee580acd891587d8469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_swirles, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:08:38 np0005549474 nervous_swirles[264693]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:08:38 np0005549474 nervous_swirles[264693]: --> All data devices are unavailable
Dec  7 05:08:38 np0005549474 systemd[1]: libpod-fb71862ccd9d70f797d43006a93fb31fb852381821755ee580acd891587d8469.scope: Deactivated successfully.
Dec  7 05:08:38 np0005549474 podman[264677]: 2025-12-07 10:08:38.233513261 +0000 UTC m=+0.599554114 container died fb71862ccd9d70f797d43006a93fb31fb852381821755ee580acd891587d8469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:08:38 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b3f05f69c914adaeb6a3200ad37404d391ec9f0d245d2744408d1c4f78e74403-merged.mount: Deactivated successfully.
Dec  7 05:08:38 np0005549474 podman[264677]: 2025-12-07 10:08:38.287347683 +0000 UTC m=+0.653388536 container remove fb71862ccd9d70f797d43006a93fb31fb852381821755ee580acd891587d8469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:08:38 np0005549474 systemd[1]: libpod-conmon-fb71862ccd9d70f797d43006a93fb31fb852381821755ee580acd891587d8469.scope: Deactivated successfully.
Dec  7 05:08:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:38 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:38.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:38.619 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:08:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:38.620 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:08:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:08:38.620 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:08:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:38 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v793: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Dec  7 05:08:39 np0005549474 podman[264815]: 2025-12-07 10:08:39.096810246 +0000 UTC m=+0.071851016 container create 4344baa8718b47bd9517b2ad21f198f67527cb7b7f083a0cc02a0c3b56621a7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rosalind, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 05:08:39 np0005549474 systemd[1]: Started libpod-conmon-4344baa8718b47bd9517b2ad21f198f67527cb7b7f083a0cc02a0c3b56621a7e.scope.
Dec  7 05:08:39 np0005549474 podman[264815]: 2025-12-07 10:08:39.068500431 +0000 UTC m=+0.043541271 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:08:39 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:08:39 np0005549474 podman[264815]: 2025-12-07 10:08:39.193370925 +0000 UTC m=+0.168411775 container init 4344baa8718b47bd9517b2ad21f198f67527cb7b7f083a0cc02a0c3b56621a7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 05:08:39 np0005549474 podman[264815]: 2025-12-07 10:08:39.202275719 +0000 UTC m=+0.177316459 container start 4344baa8718b47bd9517b2ad21f198f67527cb7b7f083a0cc02a0c3b56621a7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rosalind, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 05:08:39 np0005549474 podman[264815]: 2025-12-07 10:08:39.205329442 +0000 UTC m=+0.180370222 container attach 4344baa8718b47bd9517b2ad21f198f67527cb7b7f083a0cc02a0c3b56621a7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 05:08:39 np0005549474 elated_rosalind[264831]: 167 167
Dec  7 05:08:39 np0005549474 systemd[1]: libpod-4344baa8718b47bd9517b2ad21f198f67527cb7b7f083a0cc02a0c3b56621a7e.scope: Deactivated successfully.
Dec  7 05:08:39 np0005549474 podman[264815]: 2025-12-07 10:08:39.211601784 +0000 UTC m=+0.186642524 container died 4344baa8718b47bd9517b2ad21f198f67527cb7b7f083a0cc02a0c3b56621a7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 05:08:39 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b78ac6c0a50f1ff65e6681bfe23cd2ab8bf8f911e1639a3936c0c74e09033f73-merged.mount: Deactivated successfully.
Dec  7 05:08:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:39 np0005549474 podman[264815]: 2025-12-07 10:08:39.255772211 +0000 UTC m=+0.230812961 container remove 4344baa8718b47bd9517b2ad21f198f67527cb7b7f083a0cc02a0c3b56621a7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:08:39 np0005549474 systemd[1]: libpod-conmon-4344baa8718b47bd9517b2ad21f198f67527cb7b7f083a0cc02a0c3b56621a7e.scope: Deactivated successfully.
Dec  7 05:08:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:39 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:39 np0005549474 podman[264858]: 2025-12-07 10:08:39.416211049 +0000 UTC m=+0.053169955 container create 73b5ee37c23fdd638a7435bbad6f81e49bcf9e7d3b418b1f2bc0003e465519ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_feistel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 05:08:39 np0005549474 systemd[1]: Started libpod-conmon-73b5ee37c23fdd638a7435bbad6f81e49bcf9e7d3b418b1f2bc0003e465519ad.scope.
Dec  7 05:08:39 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:08:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6171186e61b628ad16384c86b876906d56cf10830ed5e7d604ab1f77371c48cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6171186e61b628ad16384c86b876906d56cf10830ed5e7d604ab1f77371c48cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6171186e61b628ad16384c86b876906d56cf10830ed5e7d604ab1f77371c48cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6171186e61b628ad16384c86b876906d56cf10830ed5e7d604ab1f77371c48cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:39 np0005549474 podman[264858]: 2025-12-07 10:08:39.38553133 +0000 UTC m=+0.022490256 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:08:39 np0005549474 podman[264858]: 2025-12-07 10:08:39.487497127 +0000 UTC m=+0.124456053 container init 73b5ee37c23fdd638a7435bbad6f81e49bcf9e7d3b418b1f2bc0003e465519ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 05:08:39 np0005549474 podman[264858]: 2025-12-07 10:08:39.492870084 +0000 UTC m=+0.129828970 container start 73b5ee37c23fdd638a7435bbad6f81e49bcf9e7d3b418b1f2bc0003e465519ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  7 05:08:39 np0005549474 podman[264858]: 2025-12-07 10:08:39.49563973 +0000 UTC m=+0.132598626 container attach 73b5ee37c23fdd638a7435bbad6f81e49bcf9e7d3b418b1f2bc0003e465519ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 05:08:39 np0005549474 funny_feistel[264874]: {
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:    "0": [
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:        {
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            "devices": [
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "/dev/loop3"
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            ],
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            "lv_name": "ceph_lv0",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            "lv_size": "21470642176",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            "name": "ceph_lv0",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            "tags": {
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.cluster_name": "ceph",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.crush_device_class": "",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.encrypted": "0",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.osd_id": "0",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.type": "block",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.vdo": "0",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:                "ceph.with_tpm": "0"
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            },
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            "type": "block",
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:            "vg_name": "ceph_vg0"
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:        }
Dec  7 05:08:39 np0005549474 funny_feistel[264874]:    ]
Dec  7 05:08:39 np0005549474 funny_feistel[264874]: }
Dec  7 05:08:39 np0005549474 nova_compute[256753]: 2025-12-07 10:08:39.750 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:39 np0005549474 systemd[1]: libpod-73b5ee37c23fdd638a7435bbad6f81e49bcf9e7d3b418b1f2bc0003e465519ad.scope: Deactivated successfully.
Dec  7 05:08:39 np0005549474 podman[264858]: 2025-12-07 10:08:39.779044669 +0000 UTC m=+0.416003565 container died 73b5ee37c23fdd638a7435bbad6f81e49bcf9e7d3b418b1f2bc0003e465519ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 05:08:39 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6171186e61b628ad16384c86b876906d56cf10830ed5e7d604ab1f77371c48cb-merged.mount: Deactivated successfully.
Dec  7 05:08:39 np0005549474 podman[264858]: 2025-12-07 10:08:39.822386904 +0000 UTC m=+0.459345840 container remove 73b5ee37c23fdd638a7435bbad6f81e49bcf9e7d3b418b1f2bc0003e465519ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_feistel, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 05:08:39 np0005549474 systemd[1]: libpod-conmon-73b5ee37c23fdd638a7435bbad6f81e49bcf9e7d3b418b1f2bc0003e465519ad.scope: Deactivated successfully.
Dec  7 05:08:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:39] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Dec  7 05:08:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:39] "GET /metrics HTTP/1.1" 200 48384 "" "Prometheus/2.51.0"
Dec  7 05:08:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:40 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:40 np0005549474 podman[264987]: 2025-12-07 10:08:40.526011632 +0000 UTC m=+0.071978719 container create 2a493cf39cbd00e7c33435552de505871cad0de91caadb45683be1a3cd964787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 05:08:40 np0005549474 systemd[1]: Started libpod-conmon-2a493cf39cbd00e7c33435552de505871cad0de91caadb45683be1a3cd964787.scope.
Dec  7 05:08:40 np0005549474 podman[264987]: 2025-12-07 10:08:40.497630986 +0000 UTC m=+0.043598133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:08:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:40.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:40 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:08:40 np0005549474 podman[264987]: 2025-12-07 10:08:40.640149283 +0000 UTC m=+0.186116410 container init 2a493cf39cbd00e7c33435552de505871cad0de91caadb45683be1a3cd964787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kowalevski, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  7 05:08:40 np0005549474 podman[264987]: 2025-12-07 10:08:40.651849873 +0000 UTC m=+0.197816960 container start 2a493cf39cbd00e7c33435552de505871cad0de91caadb45683be1a3cd964787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 05:08:40 np0005549474 podman[264987]: 2025-12-07 10:08:40.655949685 +0000 UTC m=+0.201916842 container attach 2a493cf39cbd00e7c33435552de505871cad0de91caadb45683be1a3cd964787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kowalevski, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:08:40 np0005549474 affectionate_kowalevski[265003]: 167 167
Dec  7 05:08:40 np0005549474 systemd[1]: libpod-2a493cf39cbd00e7c33435552de505871cad0de91caadb45683be1a3cd964787.scope: Deactivated successfully.
Dec  7 05:08:40 np0005549474 podman[264987]: 2025-12-07 10:08:40.658789742 +0000 UTC m=+0.204756839 container died 2a493cf39cbd00e7c33435552de505871cad0de91caadb45683be1a3cd964787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:08:40 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3eb8fd880f028acb0ff5bf968fb1bad28c7375b8958c0913dda8c6f1d2d57e67-merged.mount: Deactivated successfully.
Dec  7 05:08:40 np0005549474 podman[264987]: 2025-12-07 10:08:40.707321879 +0000 UTC m=+0.253288946 container remove 2a493cf39cbd00e7c33435552de505871cad0de91caadb45683be1a3cd964787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kowalevski, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:08:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:40 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:40 np0005549474 systemd[1]: libpod-conmon-2a493cf39cbd00e7c33435552de505871cad0de91caadb45683be1a3cd964787.scope: Deactivated successfully.
Dec  7 05:08:40 np0005549474 podman[265028]: 2025-12-07 10:08:40.937653547 +0000 UTC m=+0.067151997 container create 777dbcea482bd1ae8aa23c90b126e28f05580258a1281de0ffb9478af69a4a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 05:08:40 np0005549474 systemd[1]: Started libpod-conmon-777dbcea482bd1ae8aa23c90b126e28f05580258a1281de0ffb9478af69a4a57.scope.
Dec  7 05:08:41 np0005549474 podman[265028]: 2025-12-07 10:08:40.912383416 +0000 UTC m=+0.041881866 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:08:41 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:08:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c8f4ed22b326b7d01620531c897a9ada7b1d0c4dbbdde675e7b0b77901c60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c8f4ed22b326b7d01620531c897a9ada7b1d0c4dbbdde675e7b0b77901c60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c8f4ed22b326b7d01620531c897a9ada7b1d0c4dbbdde675e7b0b77901c60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181c8f4ed22b326b7d01620531c897a9ada7b1d0c4dbbdde675e7b0b77901c60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:08:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v794: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 36 op/s
Dec  7 05:08:41 np0005549474 podman[265028]: 2025-12-07 10:08:41.047144061 +0000 UTC m=+0.176642521 container init 777dbcea482bd1ae8aa23c90b126e28f05580258a1281de0ffb9478af69a4a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 05:08:41 np0005549474 podman[265028]: 2025-12-07 10:08:41.06209876 +0000 UTC m=+0.191597210 container start 777dbcea482bd1ae8aa23c90b126e28f05580258a1281de0ffb9478af69a4a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 05:08:41 np0005549474 podman[265028]: 2025-12-07 10:08:41.066377407 +0000 UTC m=+0.195875857 container attach 777dbcea482bd1ae8aa23c90b126e28f05580258a1281de0ffb9478af69a4a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:08:41 np0005549474 nova_compute[256753]: 2025-12-07 10:08:41.191 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:41 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:41 np0005549474 lvm[265122]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:08:41 np0005549474 lvm[265122]: VG ceph_vg0 finished
Dec  7 05:08:41 np0005549474 festive_jepsen[265045]: {}
Dec  7 05:08:41 np0005549474 systemd[1]: libpod-777dbcea482bd1ae8aa23c90b126e28f05580258a1281de0ffb9478af69a4a57.scope: Deactivated successfully.
Dec  7 05:08:41 np0005549474 podman[265028]: 2025-12-07 10:08:41.765124202 +0000 UTC m=+0.894622662 container died 777dbcea482bd1ae8aa23c90b126e28f05580258a1281de0ffb9478af69a4a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:08:41 np0005549474 systemd[1]: libpod-777dbcea482bd1ae8aa23c90b126e28f05580258a1281de0ffb9478af69a4a57.scope: Consumed 1.117s CPU time.
Dec  7 05:08:41 np0005549474 systemd[1]: var-lib-containers-storage-overlay-181c8f4ed22b326b7d01620531c897a9ada7b1d0c4dbbdde675e7b0b77901c60-merged.mount: Deactivated successfully.
Dec  7 05:08:41 np0005549474 podman[265028]: 2025-12-07 10:08:41.804777776 +0000 UTC m=+0.934276186 container remove 777dbcea482bd1ae8aa23c90b126e28f05580258a1281de0ffb9478af69a4a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jepsen, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 05:08:41 np0005549474 systemd[1]: libpod-conmon-777dbcea482bd1ae8aa23c90b126e28f05580258a1281de0ffb9478af69a4a57.scope: Deactivated successfully.
Dec  7 05:08:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:08:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:08:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:08:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:08:42
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.data', 'vms', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'images', 'default.rgw.control', '.nfs']
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:08:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:08:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:08:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:42 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:08:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:42.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:42 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:08:42 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:08:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:08:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:42 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:08:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:08:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v795: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  7 05:08:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:43.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:43 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:44.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:44 np0005549474 nova_compute[256753]: 2025-12-07 10:08:44.751 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:44 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:44 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v796: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  7 05:08:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:45.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:45 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:46 np0005549474 nova_compute[256753]: 2025-12-07 10:08:46.163 256757 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765102111.162621, 85f56bb8-2b0e-4405-a313-156300c853e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:08:46 np0005549474 nova_compute[256753]: 2025-12-07 10:08:46.164 256757 INFO nova.compute.manager [-] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] VM Stopped (Lifecycle Event)#033[00m
Dec  7 05:08:46 np0005549474 nova_compute[256753]: 2025-12-07 10:08:46.192 256757 DEBUG nova.compute.manager [None req-9d0fe04a-a515-4b9f-ade0-9b5c7af0b83a - - - - - -] [instance: 85f56bb8-2b0e-4405-a313-156300c853e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:08:46 np0005549474 nova_compute[256753]: 2025-12-07 10:08:46.195 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:46.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v797: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:08:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:08:47.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:08:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:47.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:47 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:08:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:48 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:48.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:48 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v798: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:08:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:49.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:49 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:49 np0005549474 nova_compute[256753]: 2025-12-07 10:08:49.754 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:49] "GET /metrics HTTP/1.1" 200 48359 "" "Prometheus/2.51.0"
Dec  7 05:08:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:49] "GET /metrics HTTP/1.1" 200 48359 "" "Prometheus/2.51.0"
Dec  7 05:08:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:50 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:50.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:50 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v799: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  7 05:08:51 np0005549474 nova_compute[256753]: 2025-12-07 10:08:51.196 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:51.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:51 np0005549474 podman[265195]: 2025-12-07 10:08:51.266696563 +0000 UTC m=+0.083597087 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Dec  7 05:08:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:51 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:51 np0005549474 podman[265196]: 2025-12-07 10:08:51.299894111 +0000 UTC m=+0.109277259 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 05:08:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:52.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:08:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v800: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:08:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:53.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:53 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:54 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:54.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:54 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:54 np0005549474 nova_compute[256753]: 2025-12-07 10:08:54.756 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v801: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  7 05:08:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:55.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:55 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:56 np0005549474 nova_compute[256753]: 2025-12-07 10:08:56.245 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:56 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:08:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:56.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:08:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:56 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v802: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:08:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:08:57.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:08:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:57.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:57 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:08:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:08:57 np0005549474 podman[265250]: 2025-12-07 10:08:57.439686176 +0000 UTC m=+0.115598781 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  7 05:08:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:08:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:58 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:58.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:58 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v803: 337 pgs: 337 active+clean; 79 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 26 op/s
Dec  7 05:08:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:08:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:08:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:59.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:08:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:08:59 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:08:59 np0005549474 nova_compute[256753]: 2025-12-07 10:08:59.759 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:08:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:59] "GET /metrics HTTP/1.1" 200 48361 "" "Prometheus/2.51.0"
Dec  7 05:08:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:08:59] "GET /metrics HTTP/1.1" 200 48361 "" "Prometheus/2.51.0"
Dec  7 05:09:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:00 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 05:09:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:00.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  7 05:09:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:00 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v804: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:09:01 np0005549474 nova_compute[256753]: 2025-12-07 10:09:01.247 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:01.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:01 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100902 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:09:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:02 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:02.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:02 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  7 05:09:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1661943840' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  7 05:09:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  7 05:09:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1661943840' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  7 05:09:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v805: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:09:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:03.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:04 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:04.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:04 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:04 np0005549474 nova_compute[256753]: 2025-12-07 10:09:04.762 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v806: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  7 05:09:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:05.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:05 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:06 np0005549474 nova_compute[256753]: 2025-12-07 10:09:06.249 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:06 np0005549474 nova_compute[256753]: 2025-12-07 10:09:06.391 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:06 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:09:06.391 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:09:06 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:09:06.393 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  7 05:09:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:06 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:06.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:06 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v807: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec  7 05:09:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:09:07.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:09:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:09:07.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:09:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:07.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:07 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:08 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 05:09:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:08.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 05:09:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:08 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v808: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  7 05:09:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:09.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:09 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:09 np0005549474 nova_compute[256753]: 2025-12-07 10:09:09.763 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:09] "GET /metrics HTTP/1.1" 200 48361 "" "Prometheus/2.51.0"
Dec  7 05:09:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:09] "GET /metrics HTTP/1.1" 200 48361 "" "Prometheus/2.51.0"
Dec  7 05:09:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 05:09:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5418 writes, 24K keys, 5418 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 5418 writes, 5418 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1506 writes, 6509 keys, 1506 commit groups, 1.0 writes per commit group, ingest: 11.20 MB, 0.02 MB/s#012Interval WAL: 1506 writes, 1506 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     58.0      0.63              0.10        13    0.049       0      0       0.0       0.0#012  L6      1/0   13.22 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.2     84.1     73.0      2.11              0.42        12    0.175     61K   6222       0.0       0.0#012 Sum      1/0   13.22 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.2     64.7     69.5      2.74              0.52        25    0.110     61K   6222       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6     82.6     84.6      0.90              0.23        10    0.090     29K   2589       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0     84.1     73.0      2.11              0.42        12    0.175     61K   6222       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     58.5      0.63              0.10        12    0.052       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      8.5      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.036, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.19 GB write, 0.11 MB/s write, 0.17 GB read, 0.10 MB/s read, 2.7 seconds#012Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5637d9ea7350#2 capacity: 304.00 MB usage: 12.36 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000114 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(683,11.86 MB,3.90246%) FilterBlock(26,184.23 KB,0.059183%) IndexBlock(26,325.02 KB,0.104407%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  7 05:09:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:10 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.639264) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150639310, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 906, "num_deletes": 251, "total_data_size": 1363529, "memory_usage": 1385912, "flush_reason": "Manual Compaction"}
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec  7 05:09:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:10.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150659819, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1348967, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23973, "largest_seqno": 24878, "table_properties": {"data_size": 1344629, "index_size": 1990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10111, "raw_average_key_size": 19, "raw_value_size": 1335707, "raw_average_value_size": 2608, "num_data_blocks": 88, "num_entries": 512, "num_filter_entries": 512, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102078, "oldest_key_time": 1765102078, "file_creation_time": 1765102150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 20671 microseconds, and 7263 cpu microseconds.
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.659939) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1348967 bytes OK
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.659981) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.662189) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.662215) EVENT_LOG_v1 {"time_micros": 1765102150662210, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.662231) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1359204, prev total WAL file size 1359204, number of live WAL files 2.
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.663130) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1317KB)], [53(13MB)]
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150663224, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 15213451, "oldest_snapshot_seqno": -1}
Dec  7 05:09:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:10 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5504 keys, 12993804 bytes, temperature: kUnknown
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150808646, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12993804, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12957391, "index_size": 21583, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 141333, "raw_average_key_size": 25, "raw_value_size": 12858113, "raw_average_value_size": 2336, "num_data_blocks": 876, "num_entries": 5504, "num_filter_entries": 5504, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765102150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.809273) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12993804 bytes
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.811106) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.5 rd, 89.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 13.2 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(20.9) write-amplify(9.6) OK, records in: 6020, records dropped: 516 output_compression: NoCompression
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.811139) EVENT_LOG_v1 {"time_micros": 1765102150811125, "job": 28, "event": "compaction_finished", "compaction_time_micros": 145611, "compaction_time_cpu_micros": 45622, "output_level": 6, "num_output_files": 1, "total_output_size": 12993804, "num_input_records": 6020, "num_output_records": 5504, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150811858, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150816965, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.662995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.817111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.817119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.817122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.817125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:09:10 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:09:10.817127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:09:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v809: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 313 KiB/s wr, 75 op/s
Dec  7 05:09:11 np0005549474 nova_compute[256753]: 2025-12-07 10:09:11.251 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:11 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:09:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 05:09:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:11.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 05:09:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:11 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:09:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:09:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:09:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:09:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:12 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:09:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:09:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:09:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:09:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:12.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:12 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v810: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec  7 05:09:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:13.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:13 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:14 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:09:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:14 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:09:14 np0005549474 ovn_controller[154296]: 2025-12-07T10:09:14Z|00038|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec  7 05:09:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:14 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:14.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:14 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:14 np0005549474 nova_compute[256753]: 2025-12-07 10:09:14.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:09:14 np0005549474 nova_compute[256753]: 2025-12-07 10:09:14.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:09:14 np0005549474 nova_compute[256753]: 2025-12-07 10:09:14.766 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v811: 337 pgs: 337 active+clean; 109 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Dec  7 05:09:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:15.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0024a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:15 np0005549474 nova_compute[256753]: 2025-12-07 10:09:15.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:09:15 np0005549474 nova_compute[256753]: 2025-12-07 10:09:15.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:09:15 np0005549474 nova_compute[256753]: 2025-12-07 10:09:15.788 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:09:15 np0005549474 nova_compute[256753]: 2025-12-07 10:09:15.788 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:09:15 np0005549474 nova_compute[256753]: 2025-12-07 10:09:15.789 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:09:15 np0005549474 nova_compute[256753]: 2025-12-07 10:09:15.789 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  7 05:09:15 np0005549474 nova_compute[256753]: 2025-12-07 10:09:15.790 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:09:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:09:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3932531656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:09:16 np0005549474 nova_compute[256753]: 2025-12-07 10:09:16.244 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:09:16 np0005549474 nova_compute[256753]: 2025-12-07 10:09:16.253 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:16 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:09:16.396 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:09:16 np0005549474 nova_compute[256753]: 2025-12-07 10:09:16.503 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:09:16 np0005549474 nova_compute[256753]: 2025-12-07 10:09:16.505 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4606MB free_disk=59.943607330322266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  7 05:09:16 np0005549474 nova_compute[256753]: 2025-12-07 10:09:16.505 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:09:16 np0005549474 nova_compute[256753]: 2025-12-07 10:09:16.506 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:09:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:16 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:16 np0005549474 nova_compute[256753]: 2025-12-07 10:09:16.579 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  7 05:09:16 np0005549474 nova_compute[256753]: 2025-12-07 10:09:16.580 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  7 05:09:16 np0005549474 nova_compute[256753]: 2025-12-07 10:09:16.596 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:09:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:16.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
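
The anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and .102 recur roughly every two seconds throughout this section, which looks like external health checking of radosgw. A regex sketch for pulling client, status, and latency out of the beast access line; the format is inferred from the lines themselves, not from radosgw documentation:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:10:09:16.661 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))
    # 192.168.122.100 200 0.000000000
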
Dec  7 05:09:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:16 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v812: 337 pgs: 337 active+clean; 109 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 246 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Dec  7 05:09:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:09:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/487612971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:09:17 np0005549474 nova_compute[256753]: 2025-12-07 10:09:17.065 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
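
For disk capacity the tracker shells out to the exact command logged at 10:09:16.596 and back at 10:09:17.065 (0.469s round trip). A sketch of consuming that output, assuming the cluster-wide counters live under a top-level "stats" object as current ceph releases report them:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    free_gb = stats["total_avail_bytes"] / 1024 ** 3
    print(f"free_disk={free_gb:.1f}GB")   # compare free_disk= in the resource view
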
Dec  7 05:09:17 np0005549474 nova_compute[256753]: 2025-12-07 10:09:17.073 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  7 05:09:17 np0005549474 nova_compute[256753]: 2025-12-07 10:09:17.093 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
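
The inventory dict above fixes what the scheduler can actually place here: placement treats capacity as roughly (total - reserved) * allocation_ratio. Worked out from the logged numbers:

    # Schedulable capacity implied by the inventory in the log line above.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # MEMORY_MB 7168.0 / VCPU 32.0 / DISK_GB 52.2

So the 8 physical vCPUs are 4x overcommitted (32 schedulable), RAM is not overcommitted, and disk is deliberately undercommitted at 0.9.
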
Dec  7 05:09:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:09:17.127Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:09:17 np0005549474 nova_compute[256753]: 2025-12-07 10:09:17.131 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  7 05:09:17 np0005549474 nova_compute[256753]: 2025-12-07 10:09:17.132 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:09:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:17 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:09:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:17 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:17.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:18 np0005549474 nova_compute[256753]: 2025-12-07 10:09:18.132 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:09:18 np0005549474 nova_compute[256753]: 2025-12-07 10:09:18.133 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  7 05:09:18 np0005549474 nova_compute[256753]: 2025-12-07 10:09:18.133 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  7 05:09:18 np0005549474 nova_compute[256753]: 2025-12-07 10:09:18.157 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  7 05:09:18 np0005549474 nova_compute[256753]: 2025-12-07 10:09:18.157 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:09:18 np0005549474 nova_compute[256753]: 2025-12-07 10:09:18.157 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:09:18 np0005549474 nova_compute[256753]: 2025-12-07 10:09:18.158 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
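
The burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic task machinery walking the manager's registered tasks; _reclaim_queued_deletes bails out immediately because reclaim_instance_interval is 0 (deleted instances are not soft-deleted first). A minimal sketch of the registration pattern, with an illustrative task name:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        # spacing is in seconds; decorated methods are collected automatically.
        @periodic_task.periodic_task(spacing=60)
        def _poll_something(self, context):
            pass

    mgr = Manager(CONF)
    mgr.run_periodic_tasks(context=None)   # emits one "Running periodic task" per task
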
Dec  7 05:09:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:18 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0024a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:18.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:18 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:18 np0005549474 nova_compute[256753]: 2025-12-07 10:09:18.773 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:09:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v813: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec  7 05:09:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:19 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:19.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:19 np0005549474 nova_compute[256753]: 2025-12-07 10:09:19.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:09:19 np0005549474 nova_compute[256753]: 2025-12-07 10:09:19.770 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:19] "GET /metrics HTTP/1.1" 200 48386 "" "Prometheus/2.51.0"
Dec  7 05:09:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:19] "GET /metrics HTTP/1.1" 200 48386 "" "Prometheus/2.51.0"
Dec  7 05:09:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:20 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:20.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:20 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0024a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:21 np0005549474 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  7 05:09:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v814: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec  7 05:09:21 np0005549474 nova_compute[256753]: 2025-12-07 10:09:21.255 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:21 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:21.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:22 np0005549474 podman[265366]: 2025-12-07 10:09:22.255963991 +0000 UTC m=+0.066326985 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd)
Dec  7 05:09:22 np0005549474 podman[265367]: 2025-12-07 10:09:22.315657513 +0000 UTC m=+0.126606333 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
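
The two podman health_status events above record the periodic container health checks: multipathd and ovn_controller both report healthy with a failing streak of 0, and each event dumps the container's full edpm_ansible config_data. The same check can be triggered on demand; a sketch using the container names from the log (podman healthcheck run exits 0 when the configured check passes):

    import subprocess

    for name in ("multipathd", "ovn_controller"):
        rc = subprocess.call(["podman", "healthcheck", "run", name])
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
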
Dec  7 05:09:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100922 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:09:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:22 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:22.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:22 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v815: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec  7 05:09:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:23 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0037f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:23.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:24 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:24.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:24 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:24 np0005549474 nova_compute[256753]: 2025-12-07 10:09:24.773 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v816: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec  7 05:09:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:25 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:25.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:26 np0005549474 nova_compute[256753]: 2025-12-07 10:09:26.257 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:26 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0037f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:26.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:26 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v817: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 82 KiB/s wr, 20 op/s
Dec  7 05:09:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:09:27.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:09:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:09:27.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
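
Alertmanager keeps failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2: first "dial tcp ... i/o timeout", then "context deadline exceeded" once the retries are cancelled (the same pair of errors recurs at 10:09:17 and 10:09:37). A quick stdlib reachability probe from the affected host, using the URL taken verbatim from the error text, helps separate a dead receiver from a filtered network path:

    import socket
    import urllib.error
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        # Alertmanager POSTs JSON here; a plain GET is enough to see whether
        # the TCP/HTTP path is open at all.
        urllib.request.urlopen(url, timeout=5)
        print("reachable")
    except (urllib.error.URLError, socket.timeout, ConnectionError) as exc:
        print("unreachable:", exc)
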
Dec  7 05:09:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:27 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:27.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:09:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:09:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:28 np0005549474 podman[265440]: 2025-12-07 10:09:28.250879533 +0000 UTC m=+0.063645831 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  7 05:09:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:28 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:28.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:28 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0037f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v818: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 85 KiB/s wr, 20 op/s
Dec  7 05:09:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:29 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0037f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:29.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:29 np0005549474 nova_compute[256753]: 2025-12-07 10:09:29.775 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:29] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:09:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:29] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:09:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:30 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:30.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:30 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v819: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 15 KiB/s wr, 1 op/s
Dec  7 05:09:31 np0005549474 nova_compute[256753]: 2025-12-07 10:09:31.258 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:31 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:31.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:32 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:32.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:32 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v820: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 15 KiB/s wr, 1 op/s
Dec  7 05:09:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:33 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:33.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:34 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:34.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:34 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:34 np0005549474 nova_compute[256753]: 2025-12-07 10:09:34.777 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v821: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 16 KiB/s wr, 2 op/s
Dec  7 05:09:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:35 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:35.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:36 np0005549474 nova_compute[256753]: 2025-12-07 10:09:36.259 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:36 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:36.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:36 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v822: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 3.3 KiB/s wr, 1 op/s
Dec  7 05:09:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:09:37.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:09:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:37 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100937 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
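
haproxy's layer-4 checks flip the nfs.cephfs backends between states in this window: nfs.cephfs.0 came UP at 10:09:22, and nfs.cephfs.1 goes DOWN here with "Connection refused". The recurring ganesha svc_vc_recv "proxy header" events throughout the section are consistent with those probe connections being opened and immediately dropped. A layer-4 check is just a TCP connect; a sketch of the equivalent probe, assuming the standard NFS port 2049 (the port is not shown in the log):

    import socket

    def l4_check(host: str, port: int = 2049, timeout: float = 2.0) -> bool:
        # Mirrors a haproxy "Layer4 check": success == TCP connect completes.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(l4_check("compute-1.ctlplane.example.com"))
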
Dec  7 05:09:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:37.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:38 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:09:38.619 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:09:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:09:38.620 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:09:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:09:38.620 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:09:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:38.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:38 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v823: 337 pgs: 337 active+clean; 58 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.5 KiB/s wr, 28 op/s
Dec  7 05:09:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:39 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:39.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:39 np0005549474 nova_compute[256753]: 2025-12-07 10:09:39.780 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:39] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:09:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:39] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:09:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:40 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:40.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:40 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Dec  7 05:09:41 np0005549474 nova_compute[256753]: 2025-12-07 10:09:41.261 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:41 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:41.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:09:42
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['.rgw.root', '.nfs', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'images']
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
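
The balancer run above used upmap mode with a 5% misplaced ceiling and prepared 0 of a possible 10 upmap changes, i.e. the PG distribution across these pools is already as even as upmap can make it. The same state can be read back out of band; a sketch (field names are what current releases return, read defensively):

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format=json"]))
    print(status.get("mode"), status.get("active"))   # expect: upmap True
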
Dec  7 05:09:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:09:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:09:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:42 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:42.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
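
The pg_autoscaler numbers above are reproducible: pg target = capacity fraction x bias x a per-cluster PG budget. Dividing the 'images' line (0.19975749 / 0.000665858) gives a budget of exactly 300 PGs, which would match, for example, 3 OSDs at the default mon_target_pg_per_osd of 100; the OSD count is an assumption, the log does not state it. Worked check against three of the logged pools:

    PG_BUDGET = 300  # e.g. 3 OSDs * mon_target_pg_per_osd(100); assumed, not logged

    pools = {
        # name: (capacity fraction from the log, bias from the log)
        "images":             (0.000665858301588852, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for name, (frac, bias) in pools.items():
        print(name, frac * bias * PG_BUDGET)
    # images             ~0.1998   (log: 0.19975..., quantized to 32, current 32)
    # cephfs.cephfs.meta ~0.00061  (log: 0.00061..., quantized to 16, current 16)
    # default.rgw.meta   ~0.00015  (log: 0.00015..., quantized to 32, current 32)

Every target is far below the current pg_num, and the autoscaler only acts on large mismatches, so each pool is "quantized" back to its current value and nothing changes.
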
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:09:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:42 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1598001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:09:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
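
Each handle_command/audit pair above is a mon_command arriving over a rados session (here from the cephadm mgr module, mgr.compute-0.dotugk). Clients can issue the same commands through python-rados; a sketch reusing the client.openstack identity seen earlier in the log, assuming its caps permit the command and that python3-rados is installed:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, out, errs = cluster.mon_command(cmd, b"")   # (retcode, output buffer, status string)
    print(ret, out.decode())
    cluster.shutdown()
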
Dec  7 05:09:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v825: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec  7 05:09:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:43 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:43.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:09:43 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:09:43 np0005549474 podman[265683]: 2025-12-07 10:09:43.637298396 +0000 UTC m=+0.068897194 container create cc1b38aae9171591dfe4fa7305778e31fc56184adaa2a227eda8959ab5de22aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:09:43 np0005549474 systemd[1]: Started libpod-conmon-cc1b38aae9171591dfe4fa7305778e31fc56184adaa2a227eda8959ab5de22aa.scope.
Dec  7 05:09:43 np0005549474 podman[265683]: 2025-12-07 10:09:43.613478995 +0000 UTC m=+0.045077843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:09:43 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:09:43 np0005549474 podman[265683]: 2025-12-07 10:09:43.745717171 +0000 UTC m=+0.177316039 container init cc1b38aae9171591dfe4fa7305778e31fc56184adaa2a227eda8959ab5de22aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 05:09:43 np0005549474 podman[265683]: 2025-12-07 10:09:43.753441052 +0000 UTC m=+0.185039870 container start cc1b38aae9171591dfe4fa7305778e31fc56184adaa2a227eda8959ab5de22aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 05:09:43 np0005549474 podman[265683]: 2025-12-07 10:09:43.757061511 +0000 UTC m=+0.188660339 container attach cc1b38aae9171591dfe4fa7305778e31fc56184adaa2a227eda8959ab5de22aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:09:43 np0005549474 optimistic_roentgen[265699]: 167 167
Dec  7 05:09:43 np0005549474 systemd[1]: libpod-cc1b38aae9171591dfe4fa7305778e31fc56184adaa2a227eda8959ab5de22aa.scope: Deactivated successfully.
Dec  7 05:09:43 np0005549474 podman[265683]: 2025-12-07 10:09:43.760417082 +0000 UTC m=+0.192015880 container died cc1b38aae9171591dfe4fa7305778e31fc56184adaa2a227eda8959ab5de22aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:09:43 np0005549474 systemd[1]: var-lib-containers-storage-overlay-8f2019242ea546e519f3b3210d8811d69df8ea8d772f727f01a482f8707e25b9-merged.mount: Deactivated successfully.
Dec  7 05:09:43 np0005549474 podman[265683]: 2025-12-07 10:09:43.806649747 +0000 UTC m=+0.238248535 container remove cc1b38aae9171591dfe4fa7305778e31fc56184adaa2a227eda8959ab5de22aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 05:09:43 np0005549474 systemd[1]: libpod-conmon-cc1b38aae9171591dfe4fa7305778e31fc56184adaa2a227eda8959ab5de22aa.scope: Deactivated successfully.
Dec  7 05:09:44 np0005549474 podman[265722]: 2025-12-07 10:09:44.027981288 +0000 UTC m=+0.062650434 container create 80969faacd1b3029e4f4b4df905f5c1587b0cbb2d9fea3c14afe62d0239fe4b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  7 05:09:44 np0005549474 systemd[1]: Started libpod-conmon-80969faacd1b3029e4f4b4df905f5c1587b0cbb2d9fea3c14afe62d0239fe4b2.scope.
Dec  7 05:09:44 np0005549474 podman[265722]: 2025-12-07 10:09:44.006255324 +0000 UTC m=+0.040924540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:09:44 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:09:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9cc619000aae62f9f5203cc053e3b2ff0149222fd602f531106f49249493a78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9cc619000aae62f9f5203cc053e3b2ff0149222fd602f531106f49249493a78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9cc619000aae62f9f5203cc053e3b2ff0149222fd602f531106f49249493a78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9cc619000aae62f9f5203cc053e3b2ff0149222fd602f531106f49249493a78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9cc619000aae62f9f5203cc053e3b2ff0149222fd602f531106f49249493a78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:44 np0005549474 podman[265722]: 2025-12-07 10:09:44.130269185 +0000 UTC m=+0.164938351 container init 80969faacd1b3029e4f4b4df905f5c1587b0cbb2d9fea3c14afe62d0239fe4b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 05:09:44 np0005549474 podman[265722]: 2025-12-07 10:09:44.145755299 +0000 UTC m=+0.180424445 container start 80969faacd1b3029e4f4b4df905f5c1587b0cbb2d9fea3c14afe62d0239fe4b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 05:09:44 np0005549474 podman[265722]: 2025-12-07 10:09:44.148778321 +0000 UTC m=+0.183447517 container attach 80969faacd1b3029e4f4b4df905f5c1587b0cbb2d9fea3c14afe62d0239fe4b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 05:09:44 np0005549474 zealous_brown[265738]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:09:44 np0005549474 zealous_brown[265738]: --> All data devices are unavailable
Dec  7 05:09:44 np0005549474 systemd[1]: libpod-80969faacd1b3029e4f4b4df905f5c1587b0cbb2d9fea3c14afe62d0239fe4b2.scope: Deactivated successfully.
Dec  7 05:09:44 np0005549474 podman[265722]: 2025-12-07 10:09:44.558669328 +0000 UTC m=+0.593338464 container died 80969faacd1b3029e4f4b4df905f5c1587b0cbb2d9fea3c14afe62d0239fe4b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 05:09:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:44 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:44 np0005549474 systemd[1]: var-lib-containers-storage-overlay-c9cc619000aae62f9f5203cc053e3b2ff0149222fd602f531106f49249493a78-merged.mount: Deactivated successfully.
Dec  7 05:09:44 np0005549474 podman[265722]: 2025-12-07 10:09:44.596235806 +0000 UTC m=+0.630904952 container remove 80969faacd1b3029e4f4b4df905f5c1587b0cbb2d9fea3c14afe62d0239fe4b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:09:44 np0005549474 systemd[1]: libpod-conmon-80969faacd1b3029e4f4b4df905f5c1587b0cbb2d9fea3c14afe62d0239fe4b2.scope: Deactivated successfully.
Dec  7 05:09:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:44.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:44 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564002c20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:44 np0005549474 nova_compute[256753]: 2025-12-07 10:09:44.781 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:09:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v826: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec  7 05:09:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:45 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1598001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:45.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:46 np0005549474 podman[265862]: 2025-12-07 10:09:46.038903891 +0000 UTC m=+0.055136759 container create aed05321022e52e797e98af6a00ae215228a4e66b3b3d528e0cbf8f7e8864680 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 05:09:46 np0005549474 systemd[1]: Started libpod-conmon-aed05321022e52e797e98af6a00ae215228a4e66b3b3d528e0cbf8f7e8864680.scope.
Dec  7 05:09:46 np0005549474 podman[265862]: 2025-12-07 10:09:46.010870154 +0000 UTC m=+0.027103072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:09:46 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:09:46 np0005549474 podman[265862]: 2025-12-07 10:09:46.137071105 +0000 UTC m=+0.153304023 container init aed05321022e52e797e98af6a00ae215228a4e66b3b3d528e0cbf8f7e8864680 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_burnell, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 05:09:46 np0005549474 podman[265862]: 2025-12-07 10:09:46.148679492 +0000 UTC m=+0.164912330 container start aed05321022e52e797e98af6a00ae215228a4e66b3b3d528e0cbf8f7e8864680 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:09:46 np0005549474 podman[265862]: 2025-12-07 10:09:46.1522757 +0000 UTC m=+0.168508568 container attach aed05321022e52e797e98af6a00ae215228a4e66b3b3d528e0cbf8f7e8864680 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 05:09:46 np0005549474 relaxed_burnell[265879]: 167 167
Dec  7 05:09:46 np0005549474 systemd[1]: libpod-aed05321022e52e797e98af6a00ae215228a4e66b3b3d528e0cbf8f7e8864680.scope: Deactivated successfully.
Dec  7 05:09:46 np0005549474 podman[265862]: 2025-12-07 10:09:46.157168924 +0000 UTC m=+0.173401792 container died aed05321022e52e797e98af6a00ae215228a4e66b3b3d528e0cbf8f7e8864680 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 05:09:46 np0005549474 systemd[1]: var-lib-containers-storage-overlay-c04e0c68563b29b8caec6ff2d6c6e59cebf4de26d9b75b3d329fc1f0f277c111-merged.mount: Deactivated successfully.
Dec  7 05:09:46 np0005549474 podman[265862]: 2025-12-07 10:09:46.213255588 +0000 UTC m=+0.229488456 container remove aed05321022e52e797e98af6a00ae215228a4e66b3b3d528e0cbf8f7e8864680 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_burnell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 05:09:46 np0005549474 systemd[1]: libpod-conmon-aed05321022e52e797e98af6a00ae215228a4e66b3b3d528e0cbf8f7e8864680.scope: Deactivated successfully.
Dec  7 05:09:46 np0005549474 nova_compute[256753]: 2025-12-07 10:09:46.262 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:09:46 np0005549474 podman[265904]: 2025-12-07 10:09:46.470299416 +0000 UTC m=+0.065136063 container create 998939d349df2afe471613057ec53dd389bb0b9ef4c39ff7de66a3d1dfbf6ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 05:09:46 np0005549474 systemd[1]: Started libpod-conmon-998939d349df2afe471613057ec53dd389bb0b9ef4c39ff7de66a3d1dfbf6ee7.scope.
Dec  7 05:09:46 np0005549474 podman[265904]: 2025-12-07 10:09:46.445120298 +0000 UTC m=+0.039956955 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:09:46 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:09:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173a0c6dc3401a55bc0fbd8ced5113145ceabfa9412459c88bbcc0bc3b098f98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173a0c6dc3401a55bc0fbd8ced5113145ceabfa9412459c88bbcc0bc3b098f98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173a0c6dc3401a55bc0fbd8ced5113145ceabfa9412459c88bbcc0bc3b098f98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:46 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173a0c6dc3401a55bc0fbd8ced5113145ceabfa9412459c88bbcc0bc3b098f98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:46 np0005549474 podman[265904]: 2025-12-07 10:09:46.576470749 +0000 UTC m=+0.171307436 container init 998939d349df2afe471613057ec53dd389bb0b9ef4c39ff7de66a3d1dfbf6ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Dec  7 05:09:46 np0005549474 podman[265904]: 2025-12-07 10:09:46.590169083 +0000 UTC m=+0.185005700 container start 998939d349df2afe471613057ec53dd389bb0b9ef4c39ff7de66a3d1dfbf6ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec  7 05:09:46 np0005549474 podman[265904]: 2025-12-07 10:09:46.593720471 +0000 UTC m=+0.188557118 container attach 998939d349df2afe471613057ec53dd389bb0b9ef4c39ff7de66a3d1dfbf6ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:09:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:09:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:46.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:46 np0005549474 zen_buck[265920]: {
Dec  7 05:09:46 np0005549474 zen_buck[265920]:    "0": [
Dec  7 05:09:46 np0005549474 zen_buck[265920]:        {
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            "devices": [
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "/dev/loop3"
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            ],
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            "lv_name": "ceph_lv0",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            "lv_size": "21470642176",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            "name": "ceph_lv0",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            "tags": {
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.cluster_name": "ceph",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.crush_device_class": "",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.encrypted": "0",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.osd_id": "0",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.type": "block",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.vdo": "0",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:                "ceph.with_tpm": "0"
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            },
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            "type": "block",
Dec  7 05:09:46 np0005549474 zen_buck[265920]:            "vg_name": "ceph_vg0"
Dec  7 05:09:46 np0005549474 zen_buck[265920]:        }
Dec  7 05:09:46 np0005549474 zen_buck[265920]:    ]
Dec  7 05:09:46 np0005549474 zen_buck[265920]: }
Dec  7 05:09:46 np0005549474 systemd[1]: libpod-998939d349df2afe471613057ec53dd389bb0b9ef4c39ff7de66a3d1dfbf6ee7.scope: Deactivated successfully.
Dec  7 05:09:46 np0005549474 podman[265904]: 2025-12-07 10:09:46.924548776 +0000 UTC m=+0.519385423 container died 998939d349df2afe471613057ec53dd389bb0b9ef4c39ff7de66a3d1dfbf6ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:09:46 np0005549474 systemd[1]: var-lib-containers-storage-overlay-173a0c6dc3401a55bc0fbd8ced5113145ceabfa9412459c88bbcc0bc3b098f98-merged.mount: Deactivated successfully.
Dec  7 05:09:46 np0005549474 podman[265904]: 2025-12-07 10:09:46.980376202 +0000 UTC m=+0.575212809 container remove 998939d349df2afe471613057ec53dd389bb0b9ef4c39ff7de66a3d1dfbf6ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 05:09:46 np0005549474 systemd[1]: libpod-conmon-998939d349df2afe471613057ec53dd389bb0b9ef4c39ff7de66a3d1dfbf6ee7.scope: Deactivated successfully.
Dec  7 05:09:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v827: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  7 05:09:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:09:47.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:09:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:47 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:47.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:47 np0005549474 podman[266035]: 2025-12-07 10:09:47.714376121 +0000 UTC m=+0.056007002 container create fc7b567f8399654e7a83c8b31e8bf23d620b6fd55fe83183c5342f63e4a1b35a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_proskuriakova, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 05:09:47 np0005549474 systemd[1]: Started libpod-conmon-fc7b567f8399654e7a83c8b31e8bf23d620b6fd55fe83183c5342f63e4a1b35a.scope.
Dec  7 05:09:47 np0005549474 podman[266035]: 2025-12-07 10:09:47.686857939 +0000 UTC m=+0.028488820 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:09:47 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:09:47 np0005549474 podman[266035]: 2025-12-07 10:09:47.82406248 +0000 UTC m=+0.165693431 container init fc7b567f8399654e7a83c8b31e8bf23d620b6fd55fe83183c5342f63e4a1b35a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 05:09:47 np0005549474 podman[266035]: 2025-12-07 10:09:47.835362259 +0000 UTC m=+0.176993140 container start fc7b567f8399654e7a83c8b31e8bf23d620b6fd55fe83183c5342f63e4a1b35a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  7 05:09:47 np0005549474 podman[266035]: 2025-12-07 10:09:47.839316627 +0000 UTC m=+0.180947548 container attach fc7b567f8399654e7a83c8b31e8bf23d620b6fd55fe83183c5342f63e4a1b35a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_proskuriakova, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 05:09:47 np0005549474 great_proskuriakova[266051]: 167 167
Dec  7 05:09:47 np0005549474 systemd[1]: libpod-fc7b567f8399654e7a83c8b31e8bf23d620b6fd55fe83183c5342f63e4a1b35a.scope: Deactivated successfully.
Dec  7 05:09:47 np0005549474 conmon[266051]: conmon fc7b567f8399654e7a83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc7b567f8399654e7a83c8b31e8bf23d620b6fd55fe83183c5342f63e4a1b35a.scope/container/memory.events
Dec  7 05:09:47 np0005549474 podman[266035]: 2025-12-07 10:09:47.841168138 +0000 UTC m=+0.182799029 container died fc7b567f8399654e7a83c8b31e8bf23d620b6fd55fe83183c5342f63e4a1b35a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 05:09:47 np0005549474 systemd[1]: var-lib-containers-storage-overlay-86a3ed303d6afa7d5c77f0fdba9ce277e0988943074a604a30bdce9ba8869373-merged.mount: Deactivated successfully.
Dec  7 05:09:47 np0005549474 podman[266035]: 2025-12-07 10:09:47.885079179 +0000 UTC m=+0.226710060 container remove fc7b567f8399654e7a83c8b31e8bf23d620b6fd55fe83183c5342f63e4a1b35a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:09:47 np0005549474 systemd[1]: libpod-conmon-fc7b567f8399654e7a83c8b31e8bf23d620b6fd55fe83183c5342f63e4a1b35a.scope: Deactivated successfully.
Dec  7 05:09:48 np0005549474 podman[266075]: 2025-12-07 10:09:48.133550242 +0000 UTC m=+0.061680928 container create 3fac02bee3989590d1ed090d1e504a142e90600f29a37598941fdc667e53bd6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:09:48 np0005549474 systemd[1]: Started libpod-conmon-3fac02bee3989590d1ed090d1e504a142e90600f29a37598941fdc667e53bd6b.scope.
Dec  7 05:09:48 np0005549474 podman[266075]: 2025-12-07 10:09:48.104537949 +0000 UTC m=+0.032668705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:09:48 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:09:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54df3958c91052025cf50cdec7e539ff5ef8d93b5475da524cd4e658fcddb205/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54df3958c91052025cf50cdec7e539ff5ef8d93b5475da524cd4e658fcddb205/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54df3958c91052025cf50cdec7e539ff5ef8d93b5475da524cd4e658fcddb205/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54df3958c91052025cf50cdec7e539ff5ef8d93b5475da524cd4e658fcddb205/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:09:48 np0005549474 podman[266075]: 2025-12-07 10:09:48.230364369 +0000 UTC m=+0.158495085 container init 3fac02bee3989590d1ed090d1e504a142e90600f29a37598941fdc667e53bd6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 05:09:48 np0005549474 podman[266075]: 2025-12-07 10:09:48.243086016 +0000 UTC m=+0.171216672 container start 3fac02bee3989590d1ed090d1e504a142e90600f29a37598941fdc667e53bd6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 05:09:48 np0005549474 podman[266075]: 2025-12-07 10:09:48.246389688 +0000 UTC m=+0.174520444 container attach 3fac02bee3989590d1ed090d1e504a142e90600f29a37598941fdc667e53bd6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_feistel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:09:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:48 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1598001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:48.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:48 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:48 np0005549474 lvm[266166]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:09:48 np0005549474 lvm[266166]: VG ceph_vg0 finished
Dec  7 05:09:48 np0005549474 practical_feistel[266091]: {}
Dec  7 05:09:49 np0005549474 systemd[1]: libpod-3fac02bee3989590d1ed090d1e504a142e90600f29a37598941fdc667e53bd6b.scope: Deactivated successfully.
Dec  7 05:09:49 np0005549474 systemd[1]: libpod-3fac02bee3989590d1ed090d1e504a142e90600f29a37598941fdc667e53bd6b.scope: Consumed 1.347s CPU time.
Dec  7 05:09:49 np0005549474 podman[266075]: 2025-12-07 10:09:49.032618714 +0000 UTC m=+0.960749370 container died 3fac02bee3989590d1ed090d1e504a142e90600f29a37598941fdc667e53bd6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:09:49 np0005549474 systemd[1]: var-lib-containers-storage-overlay-54df3958c91052025cf50cdec7e539ff5ef8d93b5475da524cd4e658fcddb205-merged.mount: Deactivated successfully.
Dec  7 05:09:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v828: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Dec  7 05:09:49 np0005549474 podman[266075]: 2025-12-07 10:09:49.07523548 +0000 UTC m=+1.003366156 container remove 3fac02bee3989590d1ed090d1e504a142e90600f29a37598941fdc667e53bd6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_feistel, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 05:09:49 np0005549474 systemd[1]: libpod-conmon-3fac02bee3989590d1ed090d1e504a142e90600f29a37598941fdc667e53bd6b.scope: Deactivated successfully.
Dec  7 05:09:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:09:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:09:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:09:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:09:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:49 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:49.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:49 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:09:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:49 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:09:49 np0005549474 nova_compute[256753]: 2025-12-07 10:09:49.784 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:09:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:49] "GET /metrics HTTP/1.1" 200 48381 "" "Prometheus/2.51.0"
Dec  7 05:09:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:49] "GET /metrics HTTP/1.1" 200 48381 "" "Prometheus/2.51.0"
Dec  7 05:09:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:09:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:09:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
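The ceph-crash error above recurs throughout this log: the crash agent cannot read /var/lib/ceph/crash, which on cephadm hosts usually points at an ownership or SELinux mismatch between the container user and the host directory. Its scrape is essentially a directory walk, so the failure can be reproduced with a one-liner; the path is copied from the log line, and running this as the same unprivileged user should raise the same Errno 13.

    # Minimal reproduction of the EACCES the crash agent hits; the path
    # comes from the log line above.
    import os
    print(os.listdir("/var/lib/ceph/crash"))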
Dec  7 05:09:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:50 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:50.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:50 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1598001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v829: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:09:51 np0005549474 nova_compute[256753]: 2025-12-07 10:09:51.267 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:51 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:51.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
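The anonymous "HEAD / HTTP/1.0" requests that beast logs every second or two from 192.168.122.100 and 192.168.122.102 have the shape of load-balancer health checks (an haproxy instance for the NFS service is visible further down at 05:09:59). A minimal probe of the same form is sketched below; the port is an assumption, since the access log does not record which frontend port radosgw serves.

    # Health-probe sketch mirroring the anonymous "HEAD /" requests in the
    # beast access log. Host and port are assumptions; the log does not
    # show the RGW frontend port.
    import http.client

    def rgw_alive(host="192.168.122.100", port=8080, timeout=2.0):
        """True if the endpoint answers HEAD / with HTTP 200."""
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()

    print(rgw_alive())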
Dec  7 05:09:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 05:09:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:52.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v830: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:09:53 np0005549474 podman[266211]: 2025-12-07 10:09:53.286970887 +0000 UTC m=+0.101447335 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:09:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:53 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:53 np0005549474 podman[266212]: 2025-12-07 10:09:53.326144798 +0000 UTC m=+0.140467012 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  7 05:09:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:53.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:54 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:54.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:54 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:54 np0005549474 nova_compute[256753]: 2025-12-07 10:09:54.822 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v831: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:09:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:55 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:09:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:55.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:09:56 np0005549474 nova_compute[256753]: 2025-12-07 10:09:56.297 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:56 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980098e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:56.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:56 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v832: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  7 05:09:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:09:57.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:09:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:57 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:57.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:09:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:09:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:09:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:58 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:58.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:58 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980098e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v833: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec  7 05:09:59 np0005549474 podman[266261]: 2025-12-07 10:09:59.264663217 +0000 UTC m=+0.073119380 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  7 05:09:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:09:59 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:09:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/100959 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 05:09:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:09:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:09:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:59.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:09:59 np0005549474 nova_compute[256753]: 2025-12-07 10:09:59.825 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:09:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:59] "GET /metrics HTTP/1.1" 200 48372 "" "Prometheus/2.51.0"
Dec  7 05:09:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:09:59] "GET /metrics HTTP/1.1" 200 48372 "" "Prometheus/2.51.0"
Dec  7 05:10:00 np0005549474 ceph-mon[74516]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec  7 05:10:00 np0005549474 ceph-mon[74516]: overall HEALTH_OK
Dec  7 05:10:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:00 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:00 np0005549474 nova_compute[256753]: 2025-12-07 10:10:00.612 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "f9232f75-55c6-4982-8757-b2f3408b0ca4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:10:00 np0005549474 nova_compute[256753]: 2025-12-07 10:10:00.613 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:10:00 np0005549474 nova_compute[256753]: 2025-12-07 10:10:00.634 256757 DEBUG nova.compute.manager [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  7 05:10:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:00.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:00 np0005549474 nova_compute[256753]: 2025-12-07 10:10:00.722 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:10:00 np0005549474 nova_compute[256753]: 2025-12-07 10:10:00.723 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:10:00 np0005549474 nova_compute[256753]: 2025-12-07 10:10:00.732 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  7 05:10:00 np0005549474 nova_compute[256753]: 2025-12-07 10:10:00.732 256757 INFO nova.compute.claims [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  7 05:10:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:00 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:00 np0005549474 nova_compute[256753]: 2025-12-07 10:10:00.879 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:10:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v834: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.301 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:01 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980098e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:10:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/58678575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.377 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
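Lines 10:10:00.879 through 10:10:01.377 show nova's resource tracker shelling out to the ceph CLI rather than using librados: a plain "ceph df --format=json" under the client.openstack identity, answered by the mon (its audit entry for client.openstack appears just above). The same query can be issued standalone; the --id/--conf arguments are copied from the log, while the JSON keys used here ("stats", "total_bytes", "total_avail_bytes") follow ceph's usual df JSON layout and should be treated as an assumption.

    # The "ceph df" probe from the log, run directly and parsed. CLI
    # arguments are copied from the log line; JSON key names are an
    # assumption based on ceph's usual df --format=json output.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("avail GiB: %.1f of %.1f"
          % (stats["total_avail_bytes"] / 2**30, stats["total_bytes"] / 2**30))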
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.386 256757 DEBUG nova.compute.provider_tree [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.401 256757 DEBUG nova.scheduler.client.report [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
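The inventory dict in the line above fixes the node's schedulable capacity. Assuming placement's usual rule, capacity = (total - reserved) * allocation_ratio, the numbers work out as below; the formula is standard placement behavior, not something this log states explicitly.

    # Capacity implied by the reported inventory, assuming placement's
    # (total - reserved) * allocation_ratio rule.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "schedulable:", cap)
    # MEMORY_MB: 7168.0, VCPU: 32.0, DISK_GB: 52.2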
Dec  7 05:10:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.430 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.431 256757 DEBUG nova.compute.manager [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.497 256757 DEBUG nova.compute.manager [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.498 256757 DEBUG nova.network.neutron [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.520 256757 INFO nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.540 256757 DEBUG nova.compute.manager [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.632 256757 DEBUG nova.compute.manager [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.634 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.635 256757 INFO nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Creating image(s)#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.671 256757 DEBUG nova.storage.rbd_utils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image f9232f75-55c6-4982-8757-b2f3408b0ca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.703 256757 DEBUG nova.storage.rbd_utils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image f9232f75-55c6-4982-8757-b2f3408b0ca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.738 256757 DEBUG nova.storage.rbd_utils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image f9232f75-55c6-4982-8757-b2f3408b0ca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.743 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.796 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
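The qemu-img probe at 10:10:01.743 is deliberately wrapped in oslo's prlimit helper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a malformed or hostile image cannot wedge the compute service. It can be reproduced standalone as below; the base-image path is copied from the log, and "format"/"virtual-size" are standard qemu-img JSON keys.

    # The image-cache probe from the log, reproduced as-is: qemu-img info
    # under oslo_concurrency.prlimit with the same resource caps.
    import json
    import subprocess

    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b",
        "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.run(cmd, capture_output=True,
                                     text=True, check=True).stdout)
    print(info["format"], info["virtual-size"])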
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.797 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.798 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.798 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
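The three lockutils lines above show nova serializing its image-cache fetch on a lock named after the base image's hash (c2abdbc7...), so concurrent boots of the same image fetch it only once; here the lock is held for 0.000s because the cached file already exists. A sketch of the pattern, using the real oslo.concurrency API with a placeholder body:

    # Image-cache serialization pattern from the log. lockutils.lock() is
    # the actual oslo.concurrency API; the function body is a placeholder.
    from oslo_concurrency import lockutils

    IMAGE_HASH = "c2abdbc7095ab4b54534ae7106492229fa86ab0b"

    def fetch_base_image():
        with lockutils.lock(IMAGE_HASH):
            # placeholder: download/verify the base image unless cached
            print("fetching (or reusing) base image", IMAGE_HASH)

    fetch_base_image()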
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.832 256757 DEBUG nova.storage.rbd_utils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image f9232f75-55c6-4982-8757-b2f3408b0ca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:10:01 np0005549474 nova_compute[256753]: 2025-12-07 10:10:01.836 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b f9232f75-55c6-4982-8757-b2f3408b0ca4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:10:02 np0005549474 nova_compute[256753]: 2025-12-07 10:10:02.158 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b f9232f75-55c6-4982-8757-b2f3408b0ca4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:10:02 np0005549474 nova_compute[256753]: 2025-12-07 10:10:02.245 256757 DEBUG nova.storage.rbd_utils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] resizing rbd image f9232f75-55c6-4982-8757-b2f3408b0ca4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
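Lines 10:10:01.836 through 10:10:02.245 are the actual disk provisioning: the ~21 MB cirros base file is imported into the vms pool as <uuid>_disk, then grown to the flavor's 1 GiB root disk. In the sketch below the import invocation is copied from the log; the resize is shown as the CLI equivalent, whereas nova itself resizes through the librbd Python binding in rbd_utils.

    # Import-then-resize flow from the log. The import command matches the
    # logged invocation; "rbd resize" is a CLI stand-in for nova's librbd
    # call (1073741824 bytes = 1024 MB).
    import subprocess

    base = "/var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b"
    image = "f9232f75-55c6-4982-8757-b2f3408b0ca4_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *auth], check=True)
    subprocess.run(["rbd", "resize", "--pool", "vms", "--image", image,
                    "--size", "1024", *auth], check=True)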
Dec  7 05:10:02 np0005549474 nova_compute[256753]: 2025-12-07 10:10:02.310 256757 DEBUG nova.policy [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f27cf20bf8c4429aa12589418a57e41', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ad61a97ffab4252be3eafb028b560c1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  7 05:10:02 np0005549474 nova_compute[256753]: 2025-12-07 10:10:02.363 256757 DEBUG nova.objects.instance [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'migration_context' on Instance uuid f9232f75-55c6-4982-8757-b2f3408b0ca4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:10:02 np0005549474 nova_compute[256753]: 2025-12-07 10:10:02.389 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  7 05:10:02 np0005549474 nova_compute[256753]: 2025-12-07 10:10:02.389 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Ensure instance console log exists: /var/lib/nova/instances/f9232f75-55c6-4982-8757-b2f3408b0ca4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  7 05:10:02 np0005549474 nova_compute[256753]: 2025-12-07 10:10:02.389 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:10:02 np0005549474 nova_compute[256753]: 2025-12-07 10:10:02.390 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:10:02 np0005549474 nova_compute[256753]: 2025-12-07 10:10:02.390 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:10:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:02 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:10:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:02.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:02 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  7 05:10:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2061580697' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  7 05:10:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  7 05:10:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2061580697' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  7 05:10:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v835: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec  7 05:10:03 np0005549474 nova_compute[256753]: 2025-12-07 10:10:03.225 256757 DEBUG nova.network.neutron [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Successfully created port: d7188451-df6a-4332-8055-1f51cc58facf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  7 05:10:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:03.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:03 np0005549474 nova_compute[256753]: 2025-12-07 10:10:03.935 256757 DEBUG nova.network.neutron [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Successfully updated port: d7188451-df6a-4332-8055-1f51cc58facf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  7 05:10:03 np0005549474 nova_compute[256753]: 2025-12-07 10:10:03.960 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:10:03 np0005549474 nova_compute[256753]: 2025-12-07 10:10:03.960 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquired lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:10:03 np0005549474 nova_compute[256753]: 2025-12-07 10:10:03.960 256757 DEBUG nova.network.neutron [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  7 05:10:04 np0005549474 nova_compute[256753]: 2025-12-07 10:10:04.034 256757 DEBUG nova.compute.manager [req-ae04588a-dade-45c0-849c-6ab0b7a16883 req-91931fb4-022a-4ca3-90cd-422b7185b627 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Received event network-changed-d7188451-df6a-4332-8055-1f51cc58facf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:10:04 np0005549474 nova_compute[256753]: 2025-12-07 10:10:04.035 256757 DEBUG nova.compute.manager [req-ae04588a-dade-45c0-849c-6ab0b7a16883 req-91931fb4-022a-4ca3-90cd-422b7185b627 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Refreshing instance network info cache due to event network-changed-d7188451-df6a-4332-8055-1f51cc58facf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:10:04 np0005549474 nova_compute[256753]: 2025-12-07 10:10:04.036 256757 DEBUG oslo_concurrency.lockutils [req-ae04588a-dade-45c0-849c-6ab0b7a16883 req-91931fb4-022a-4ca3-90cd-422b7185b627 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:10:04 np0005549474 nova_compute[256753]: 2025-12-07 10:10:04.095 256757 DEBUG nova.network.neutron [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  7 05:10:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:04 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980098e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:04.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:04 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:04 np0005549474 nova_compute[256753]: 2025-12-07 10:10:04.874 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v836: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.281 256757 DEBUG nova.network.neutron [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Updating instance_info_cache with network_info: [{"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.309 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Releasing lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.310 256757 DEBUG nova.compute.manager [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Instance network_info: |[{"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.310 256757 DEBUG oslo_concurrency.lockutils [req-ae04588a-dade-45c0-849c-6ab0b7a16883 req-91931fb4-022a-4ca3-90cd-422b7185b627 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.311 256757 DEBUG nova.network.neutron [req-ae04588a-dade-45c0-849c-6ab0b7a16883 req-91931fb4-022a-4ca3-90cd-422b7185b627 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Refreshing network info cache for port d7188451-df6a-4332-8055-1f51cc58facf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.317 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Start _get_guest_xml network_info=[{"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'image_id': 'af7b5730-2fa9-449f-8ccb-a9519582f1b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.322 256757 WARNING nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:10:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:05 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.328 256757 DEBUG nova.virt.libvirt.host [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.329 256757 DEBUG nova.virt.libvirt.host [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.339 256757 DEBUG nova.virt.libvirt.host [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.340 256757 DEBUG nova.virt.libvirt.host [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.340 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.341 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-07T10:06:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bc1a767b-c985-4370-b41e-5cb294d603d7',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.342 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.342 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.343 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.343 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.344 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.344 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.345 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.345 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.345 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.346 256757 DEBUG nova.virt.hardware [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.350 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:10:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:05.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 05:10:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1637866266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.808 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.837 256757 DEBUG nova.storage.rbd_utils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image f9232f75-55c6-4982-8757-b2f3408b0ca4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:10:05 np0005549474 nova_compute[256753]: 2025-12-07 10:10:05.840 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:10:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 05:10:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478348342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.271 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.274 256757 DEBUG nova.virt.libvirt.vif [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:09:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1192553867',display_name='tempest-TestNetworkBasicOps-server-1192553867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1192553867',id=4,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPbS9aTbpy0X69C6m9JxdIrMBThePaZ9vqkS8QNE9/nY+zf5HOp8p3l9Geo7CIg7rz/Daes3m6cu2P4mTFia9frX4nXNnutbFgH8nFazNzjNquy/TGVPPZ31oy0Xas0rw==',key_name='tempest-TestNetworkBasicOps-707292887',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-q05txiox',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:10:01Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=f9232f75-55c6-4982-8757-b2f3408b0ca4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.275 256757 DEBUG nova.network.os_vif_util [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.276 256757 DEBUG nova.network.os_vif_util [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:bd:b8,bridge_name='br-int',has_traffic_filtering=True,id=d7188451-df6a-4332-8055-1f51cc58facf,network=Network(e688201f-cd34-4e2e-8b69-c5b50ad0046c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7188451-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.278 256757 DEBUG nova.objects.instance [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid f9232f75-55c6-4982-8757-b2f3408b0ca4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.298 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] End _get_guest_xml xml=<domain type="kvm">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  <uuid>f9232f75-55c6-4982-8757-b2f3408b0ca4</uuid>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  <name>instance-00000004</name>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  <memory>131072</memory>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  <vcpu>1</vcpu>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  <metadata>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <nova:name>tempest-TestNetworkBasicOps-server-1192553867</nova:name>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <nova:creationTime>2025-12-07 10:10:05</nova:creationTime>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <nova:flavor name="m1.nano">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <nova:memory>128</nova:memory>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <nova:disk>1</nova:disk>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <nova:swap>0</nova:swap>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <nova:vcpus>1</nova:vcpus>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      </nova:flavor>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <nova:owner>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      </nova:owner>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <nova:ports>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <nova:port uuid="d7188451-df6a-4332-8055-1f51cc58facf">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        </nova:port>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      </nova:ports>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    </nova:instance>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  </metadata>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  <sysinfo type="smbios">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <system>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <entry name="manufacturer">RDO</entry>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <entry name="product">OpenStack Compute</entry>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <entry name="serial">f9232f75-55c6-4982-8757-b2f3408b0ca4</entry>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <entry name="uuid">f9232f75-55c6-4982-8757-b2f3408b0ca4</entry>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <entry name="family">Virtual Machine</entry>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    </system>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  </sysinfo>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  <os>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <boot dev="hd"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <smbios mode="sysinfo"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  <features>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <acpi/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <apic/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <vmcoreinfo/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  </features>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  <clock offset="utc">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <timer name="pit" tickpolicy="delay"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <timer name="hpet" present="no"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  </clock>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  <cpu mode="host-model" match="exact">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <topology sockets="1" cores="1" threads="1"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  </cpu>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  <devices>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <disk type="network" device="disk">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/f9232f75-55c6-4982-8757-b2f3408b0ca4_disk">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <target dev="vda" bus="virtio"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <disk type="network" device="cdrom">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/f9232f75-55c6-4982-8757-b2f3408b0ca4_disk.config">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <target dev="sda" bus="sata"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <interface type="ethernet">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <mac address="fa:16:3e:f5:bd:b8"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <driver name="vhost" rx_queue_size="512"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <mtu size="1442"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <target dev="tapd7188451-df"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <serial type="pty">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <log file="/var/lib/nova/instances/f9232f75-55c6-4982-8757-b2f3408b0ca4/console.log" append="off"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    </serial>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <video>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    </video>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <input type="tablet" bus="usb"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <rng model="virtio">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <backend model="random">/dev/urandom</backend>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    </rng>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <controller type="usb" index="0"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    <memballoon model="virtio">
Dec  7 05:10:06 np0005549474 nova_compute[256753]:      <stats period="10"/>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:    </memballoon>
Dec  7 05:10:06 np0005549474 nova_compute[256753]:  </devices>
Dec  7 05:10:06 np0005549474 nova_compute[256753]: </domain>
Dec  7 05:10:06 np0005549474 nova_compute[256753]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.299 256757 DEBUG nova.compute.manager [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Preparing to wait for external event network-vif-plugged-d7188451-df6a-4332-8055-1f51cc58facf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.299 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.299 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.299 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.300 256757 DEBUG nova.virt.libvirt.vif [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:09:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1192553867',display_name='tempest-TestNetworkBasicOps-server-1192553867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1192553867',id=4,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPbS9aTbpy0X69C6m9JxdIrMBThePaZ9vqkS8QNE9/nY+zf5HOp8p3l9Geo7CIg7rz/Daes3m6cu2P4mTFia9frX4nXNnutbFgH8nFazNzjNquy/TGVPPZ31oy0Xas0rw==',key_name='tempest-TestNetworkBasicOps-707292887',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-q05txiox',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:10:01Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=f9232f75-55c6-4982-8757-b2f3408b0ca4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.300 256757 DEBUG nova.network.os_vif_util [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.300 256757 DEBUG nova.network.os_vif_util [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:bd:b8,bridge_name='br-int',has_traffic_filtering=True,id=d7188451-df6a-4332-8055-1f51cc58facf,network=Network(e688201f-cd34-4e2e-8b69-c5b50ad0046c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7188451-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.301 256757 DEBUG os_vif [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:bd:b8,bridge_name='br-int',has_traffic_filtering=True,id=d7188451-df6a-4332-8055-1f51cc58facf,network=Network(e688201f-cd34-4e2e-8b69-c5b50ad0046c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7188451-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.301 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.301 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.302 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.304 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.304 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7188451-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.305 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7188451-df, col_values=(('external_ids', {'iface-id': 'd7188451-df6a-4332-8055-1f51cc58facf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:bd:b8', 'vm-uuid': 'f9232f75-55c6-4982-8757-b2f3408b0ca4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.345 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:06 np0005549474 NetworkManager[49051]: <info>  [1765102206.3471] manager: (tapd7188451-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.349 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.355 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.356 256757 INFO os_vif [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:bd:b8,bridge_name='br-int',has_traffic_filtering=True,id=d7188451-df6a-4332-8055-1f51cc58facf,network=Network(e688201f-cd34-4e2e-8b69-c5b50ad0046c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7188451-df')#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.432 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.433 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.433 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No VIF found with MAC fa:16:3e:f5:bd:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.434 256757 INFO nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Using config drive#033[00m
Dec  7 05:10:06 np0005549474 nova_compute[256753]: 2025-12-07 10:10:06.470 256757 DEBUG nova.storage.rbd_utils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image f9232f75-55c6-4982-8757-b2f3408b0ca4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:10:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:06 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:06.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:06 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980098e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v837: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:10:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:10:07.135Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.295 256757 INFO nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Creating config drive at /var/lib/nova/instances/f9232f75-55c6-4982-8757-b2f3408b0ca4/disk.config#033[00m
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.304 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f9232f75-55c6-4982-8757-b2f3408b0ca4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu9qrjn3u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:10:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:07 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980098e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.329 256757 DEBUG nova.network.neutron [req-ae04588a-dade-45c0-849c-6ab0b7a16883 req-91931fb4-022a-4ca3-90cd-422b7185b627 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Updated VIF entry in instance network info cache for port d7188451-df6a-4332-8055-1f51cc58facf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.330 256757 DEBUG nova.network.neutron [req-ae04588a-dade-45c0-849c-6ab0b7a16883 req-91931fb4-022a-4ca3-90cd-422b7185b627 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Updating instance_info_cache with network_info: [{"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.350 256757 DEBUG oslo_concurrency.lockutils [req-ae04588a-dade-45c0-849c-6ab0b7a16883 req-91931fb4-022a-4ca3-90cd-422b7185b627 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:10:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:07.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.434 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f9232f75-55c6-4982-8757-b2f3408b0ca4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu9qrjn3u" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.476 256757 DEBUG nova.storage.rbd_utils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image f9232f75-55c6-4982-8757-b2f3408b0ca4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.481 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f9232f75-55c6-4982-8757-b2f3408b0ca4/disk.config f9232f75-55c6-4982-8757-b2f3408b0ca4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.675 256757 DEBUG oslo_concurrency.processutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f9232f75-55c6-4982-8757-b2f3408b0ca4/disk.config f9232f75-55c6-4982-8757-b2f3408b0ca4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.676 256757 INFO nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Deleting local config drive /var/lib/nova/instances/f9232f75-55c6-4982-8757-b2f3408b0ca4/disk.config because it was imported into RBD.#033[00m
Dec  7 05:10:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:10:07 np0005549474 systemd[1]: Starting libvirt secret daemon...
Dec  7 05:10:07 np0005549474 systemd[1]: Started libvirt secret daemon.
Dec  7 05:10:07 np0005549474 kernel: tapd7188451-df: entered promiscuous mode
Dec  7 05:10:07 np0005549474 NetworkManager[49051]: <info>  [1765102207.7936] manager: (tapd7188451-df): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec  7 05:10:07 np0005549474 ovn_controller[154296]: 2025-12-07T10:10:07Z|00039|binding|INFO|Claiming lport d7188451-df6a-4332-8055-1f51cc58facf for this chassis.
Dec  7 05:10:07 np0005549474 ovn_controller[154296]: 2025-12-07T10:10:07Z|00040|binding|INFO|d7188451-df6a-4332-8055-1f51cc58facf: Claiming fa:16:3e:f5:bd:b8 10.100.0.7
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.795 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.806 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.817 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.821 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:bd:b8 10.100.0.7'], port_security=['fa:16:3e:f5:bd:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f9232f75-55c6-4982-8757-b2f3408b0ca4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e688201f-cd34-4e2e-8b69-c5b50ad0046c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '47f93f67-6ce6-4959-9f55-c050bd0e7857', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0da5444-1ae1-4fbc-98ae-e56ff57f59da, chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=d7188451-df6a-4332-8055-1f51cc58facf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.823 164143 INFO neutron.agent.ovn.metadata.agent [-] Port d7188451-df6a-4332-8055-1f51cc58facf in datapath e688201f-cd34-4e2e-8b69-c5b50ad0046c bound to our chassis#033[00m
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.824 164143 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e688201f-cd34-4e2e-8b69-c5b50ad0046c#033[00m
Dec  7 05:10:07 np0005549474 systemd-udevd[266657]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.838 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[742f38b3-e776-4650-b6ea-898c4aee1fdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.838 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape688201f-c1 in ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.841 262215 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape688201f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.841 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[8eef0d8b-e4e1-4eff-b828-78a541617f5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:07 np0005549474 systemd-machined[217882]: New machine qemu-2-instance-00000004.
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.843 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[0f2623bf-f816-44f7-98f5-eb135af722f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:07 np0005549474 NetworkManager[49051]: <info>  [1765102207.8527] device (tapd7188451-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 05:10:07 np0005549474 NetworkManager[49051]: <info>  [1765102207.8544] device (tapd7188451-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  7 05:10:07 np0005549474 systemd[1]: Started Virtual Machine qemu-2-instance-00000004.
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.864 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[96c3b4a1-4fd9-4be3-83fc-aaebd370f995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.897 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[396f7a2c-ee6e-4447-a36e-d43b6601c5fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:07 np0005549474 ovn_controller[154296]: 2025-12-07T10:10:07Z|00041|binding|INFO|Setting lport d7188451-df6a-4332-8055-1f51cc58facf ovn-installed in OVS
Dec  7 05:10:07 np0005549474 ovn_controller[154296]: 2025-12-07T10:10:07Z|00042|binding|INFO|Setting lport d7188451-df6a-4332-8055-1f51cc58facf up in Southbound
Dec  7 05:10:07 np0005549474 nova_compute[256753]: 2025-12-07 10:10:07.905 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.931 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[a46e34bd-70c7-415f-b962-2ec12604b107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:07 np0005549474 systemd-udevd[266660]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:10:07 np0005549474 NetworkManager[49051]: <info>  [1765102207.9398] manager: (tape688201f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.941 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[aee2a067-8b19-45ab-8eab-2574d477bdeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.978 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5c7fe9-4247-4749-92af-925ef9c17fc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:07 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:07.981 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[26597b80-5e9d-48a5-866c-6c7c2feb8172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:08 np0005549474 NetworkManager[49051]: <info>  [1765102208.0060] device (tape688201f-c0): carrier: link connected
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.011 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec40491-6590-4dbf-bf0b-050984610a08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.032 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[3d80946f-c2a1-4fcd-aae6-b24383219da7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape688201f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:89:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417066, 'reachable_time': 43936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266689, 'error': None, 'target': 'ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.053 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[bd26337a-088b-45ed-abea-84bce008c150]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:8901'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 417066, 'tstamp': 417066}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266697, 'error': None, 'target': 'ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.072 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[a1457681-78bc-4646-8bb0-8d7e5e67a388]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape688201f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:89:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417066, 'reachable_time': 43936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266707, 'error': None, 'target': 'ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
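
The two RTM_NEWLINK replies above are pyroute2 netlink messages relayed back through the oslo.privsep daemon. A minimal sketch of fetching the same link attributes directly with pyroute2 (assuming root privileges; the namespace and interface names are taken from these log lines; an illustration, not neutron's ip_lib code):

    from pyroute2 import NetNS  # same netlink library neutron's privileged ip_lib wraps

    NS = 'ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c'  # namespace from the log
    with NetNS(NS) as ns:  # opening another namespace requires root
        for link in ns.get_links():
            if link.get_attr('IFLA_IFNAME') == 'tape688201f-c1':
                # The same attributes appear in the RTM_NEWLINK dumps above.
                print(link.get_attr('IFLA_OPERSTATE'),
                      link.get_attr('IFLA_ADDRESS'),
                      link.get_attr('IFLA_MTU'))
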
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.112 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[efa6d0ab-4f47-4f22-b207-575a0dda0e0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.171 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[a537a6cd-66f5-4856-8eb6-b7158bf9c236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.172 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape688201f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.172 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.172 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape688201f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.174 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:08 np0005549474 NetworkManager[49051]: <info>  [1765102208.1746] manager: (tape688201f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Dec  7 05:10:08 np0005549474 kernel: tape688201f-c0: entered promiscuous mode
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.176 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.176 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape688201f-c0, col_values=(('external_ids', {'iface-id': '7a14ee45-97ac-4ee5-9f10-605e069f1a1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.177 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:08 np0005549474 ovn_controller[154296]: 2025-12-07T10:10:08Z|00043|binding|INFO|Releasing lport 7a14ee45-97ac-4ee5-9f10-605e069f1a1e from this chassis (sb_readonly=0)
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.191 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.192 164143 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e688201f-cd34-4e2e-8b69-c5b50ad0046c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e688201f-cd34-4e2e-8b69-c5b50ad0046c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.193 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[300084a2-c4d2-4ca9-adef-759ddda55067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.194 164143 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: global
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    log         /dev/log local0 debug
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    log-tag     haproxy-metadata-proxy-e688201f-cd34-4e2e-8b69-c5b50ad0046c
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    user        root
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    group       root
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    maxconn     1024
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    pidfile     /var/lib/neutron/external/pids/e688201f-cd34-4e2e-8b69-c5b50ad0046c.pid.haproxy
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    daemon
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: defaults
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    log global
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    mode http
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    option httplog
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    option dontlognull
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    option http-server-close
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    option forwardfor
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    retries                 3
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    timeout http-request    30s
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    timeout connect         30s
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    timeout client          32s
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    timeout server          32s
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    timeout http-keep-alive 30s
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: listen listener
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    bind 169.254.169.254:80
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    server metadata /var/lib/neutron/metadata_proxy
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]:    http-request add-header X-OVN-Network-ID e688201f-cd34-4e2e-8b69-c5b50ad0046c
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.194 164143 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c', 'env', 'PROCESS_TAG=haproxy-e688201f-cd34-4e2e-8b69-c5b50ad0046c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e688201f-cd34-4e2e-8b69-c5b50ad0046c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
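
The agent writes the haproxy configuration printed above to /var/lib/neutron/ovn-metadata-proxy/<network>.conf and then launches haproxy inside the ovnmeta namespace through rootwrap. A rough sketch of the substitution involved, using a hand-rolled template keyed on the network UUID rather than neutron's actual driver code, with the defaults section elided for brevity:

    # Illustration only: render a metadata-proxy haproxy config shaped like
    # the one logged above, with the network UUID factored out.
    TEMPLATE = """global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-{net}
        user        root
        group       root
        maxconn     1024
        pidfile     /var/lib/neutron/external/pids/{net}.pid.haproxy
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID {net}
    """

    def render(net_id: str) -> str:
        return TEMPLATE.format(net=net_id)

    print(render('e688201f-cd34-4e2e-8b69-c5b50ad0046c'))
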
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.254 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102208.253814, f9232f75-55c6-4982-8757-b2f3408b0ca4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.254 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] VM Started (Lifecycle Event)#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.291 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.295 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102208.253938, f9232f75-55c6-4982-8757-b2f3408b0ca4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.295 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] VM Paused (Lifecycle Event)#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.324 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.327 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.356 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.515 256757 DEBUG nova.compute.manager [req-8ff4e73b-1256-4432-bf4a-eb6a27fe8001 req-b0834118-bfc9-4706-8073-64741cdc6ab0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Received event network-vif-plugged-d7188451-df6a-4332-8055-1f51cc58facf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.515 256757 DEBUG oslo_concurrency.lockutils [req-8ff4e73b-1256-4432-bf4a-eb6a27fe8001 req-b0834118-bfc9-4706-8073-64741cdc6ab0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.516 256757 DEBUG oslo_concurrency.lockutils [req-8ff4e73b-1256-4432-bf4a-eb6a27fe8001 req-b0834118-bfc9-4706-8073-64741cdc6ab0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.517 256757 DEBUG oslo_concurrency.lockutils [req-8ff4e73b-1256-4432-bf4a-eb6a27fe8001 req-b0834118-bfc9-4706-8073-64741cdc6ab0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.517 256757 DEBUG nova.compute.manager [req-8ff4e73b-1256-4432-bf4a-eb6a27fe8001 req-b0834118-bfc9-4706-8073-64741cdc6ab0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Processing event network-vif-plugged-d7188451-df6a-4332-8055-1f51cc58facf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.519 256757 DEBUG nova.compute.manager [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.525 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102208.5245645, f9232f75-55c6-4982-8757-b2f3408b0ca4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.525 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] VM Resumed (Lifecycle Event)#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.529 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.534 256757 INFO nova.virt.libvirt.driver [-] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Instance spawned successfully.#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.535 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.566 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.573 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.576 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.576 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.576 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.577 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.577 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.577 256757 DEBUG nova.virt.libvirt.driver [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:10:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:08 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.606 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  7 05:10:08 np0005549474 podman[266765]: 2025-12-07 10:10:08.621648314 +0000 UTC m=+0.076402080 container create 60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.640 256757 INFO nova.compute.manager [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Took 7.01 seconds to spawn the instance on the hypervisor.#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.640 256757 DEBUG nova.compute.manager [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:10:08 np0005549474 podman[266765]: 2025-12-07 10:10:08.579622345 +0000 UTC m=+0.034376171 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  7 05:10:08 np0005549474 systemd[1]: Started libpod-conmon-60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342.scope.
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.708 256757 INFO nova.compute.manager [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Took 8.03 seconds to build instance.#033[00m
Dec  7 05:10:08 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:10:08 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d10d48703cb8cf9180a4f20d1306e7ce7c1b5c95aa05e07248cd2c5d851dbd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:08.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
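
radosgw's beast frontend emits one access line per request in the fixed shape seen above. A small sketch for pulling the client, status, and latency out of such lines (the field layout is assumed from these samples only):

    import re

    # client - user [timestamp] "request" status ... latency=<seconds>s
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) .* latency=(?P<lat>[\d.]+)s')

    line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:10:10:08.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000027s')
    m = BEAST.search(line)
    if m:
        print(m['client'], m['status'], float(m['lat']))  # 192.168.122.100 200 0.001000027
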
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.735 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.740 256757 DEBUG oslo_concurrency.lockutils [None req-59d6868e-fda2-461d-90ef-6ad8d2fc8114 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:10:08 np0005549474 nova_compute[256753]: 2025-12-07 10:10:08.740 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:08 np0005549474 podman[266765]: 2025-12-07 10:10:08.748192694 +0000 UTC m=+0.202946440 container init 60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:10:08 np0005549474 podman[266765]: 2025-12-07 10:10:08.753831068 +0000 UTC m=+0.208584794 container start 60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:10:08 np0005549474 neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c[266779]: [NOTICE]   (266785) : New worker (266787) forked
Dec  7 05:10:08 np0005549474 neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c[266779]: [NOTICE]   (266785) : Loading success.
Dec  7 05:10:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:08 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:08 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:08.839 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  7 05:10:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v838: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec  7 05:10:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:09 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:09.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:09 np0005549474 nova_compute[256753]: 2025-12-07 10:10:09.908 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:09] "GET /metrics HTTP/1.1" 200 48372 "" "Prometheus/2.51.0"
Dec  7 05:10:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:09] "GET /metrics HTTP/1.1" 200 48372 "" "Prometheus/2.51.0"
Dec  7 05:10:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:10 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980098e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:10 np0005549474 nova_compute[256753]: 2025-12-07 10:10:10.622 256757 DEBUG nova.compute.manager [req-49039f90-8a95-42ff-9685-7416fcc2f162 req-42664e53-54f5-4126-ba84-f451296b9df8 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Received event network-vif-plugged-d7188451-df6a-4332-8055-1f51cc58facf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:10:10 np0005549474 nova_compute[256753]: 2025-12-07 10:10:10.623 256757 DEBUG oslo_concurrency.lockutils [req-49039f90-8a95-42ff-9685-7416fcc2f162 req-42664e53-54f5-4126-ba84-f451296b9df8 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:10:10 np0005549474 nova_compute[256753]: 2025-12-07 10:10:10.623 256757 DEBUG oslo_concurrency.lockutils [req-49039f90-8a95-42ff-9685-7416fcc2f162 req-42664e53-54f5-4126-ba84-f451296b9df8 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:10:10 np0005549474 nova_compute[256753]: 2025-12-07 10:10:10.624 256757 DEBUG oslo_concurrency.lockutils [req-49039f90-8a95-42ff-9685-7416fcc2f162 req-42664e53-54f5-4126-ba84-f451296b9df8 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:10:10 np0005549474 nova_compute[256753]: 2025-12-07 10:10:10.625 256757 DEBUG nova.compute.manager [req-49039f90-8a95-42ff-9685-7416fcc2f162 req-42664e53-54f5-4126-ba84-f451296b9df8 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] No waiting events found dispatching network-vif-plugged-d7188451-df6a-4332-8055-1f51cc58facf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:10:10 np0005549474 nova_compute[256753]: 2025-12-07 10:10:10.625 256757 WARNING nova.compute.manager [req-49039f90-8a95-42ff-9685-7416fcc2f162 req-42664e53-54f5-4126-ba84-f451296b9df8 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Received unexpected event network-vif-plugged-d7188451-df6a-4332-8055-1f51cc58facf for instance with vm_state active and task_state None.#033[00m
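
The warning above is benign: this second network-vif-plugged delivery arrived after the instance had finished building, so no waiter was registered for it. A toy model of that pop-or-report behaviour (a simplified illustration, not nova's InstanceEvents implementation):

    import threading

    class InstanceEvents:
        """Complete a registered waiter, or report the event as unexpected."""

        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # event tag -> threading.Event

        def prepare(self, tag):
            with self._lock:
                ev = self._waiters[tag] = threading.Event()
            return ev

        def pop(self, tag):
            with self._lock:
                ev = self._waiters.pop(tag, None)
            if ev is None:
                print('Received unexpected event', tag)
            else:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare('network-vif-plugged-d7188451')
    events.pop('network-vif-plugged-d7188451')  # first delivery completes the waiter
    events.pop('network-vif-plugged-d7188451')  # late duplicate is reported as unexpected
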
Dec  7 05:10:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:10.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:10 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v839: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Dec  7 05:10:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:11 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:11 np0005549474 nova_compute[256753]: 2025-12-07 10:10:11.345 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:11.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:10:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:10:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:10:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:10:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:10:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:10:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:10:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:10:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:12 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:10:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:12.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:12 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15980098e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:12 np0005549474 ovn_controller[154296]: 2025-12-07T10:10:12Z|00044|binding|INFO|Releasing lport 7a14ee45-97ac-4ee5-9f10-605e069f1a1e from this chassis (sb_readonly=0)
Dec  7 05:10:12 np0005549474 NetworkManager[49051]: <info>  [1765102212.8995] manager: (patch-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Dec  7 05:10:12 np0005549474 NetworkManager[49051]: <info>  [1765102212.9011] manager: (patch-br-int-to-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec  7 05:10:12 np0005549474 nova_compute[256753]: 2025-12-07 10:10:12.901 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:12 np0005549474 nova_compute[256753]: 2025-12-07 10:10:12.949 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:12 np0005549474 ovn_controller[154296]: 2025-12-07T10:10:12Z|00045|binding|INFO|Releasing lport 7a14ee45-97ac-4ee5-9f10-605e069f1a1e from this chassis (sb_readonly=0)
Dec  7 05:10:12 np0005549474 nova_compute[256753]: 2025-12-07 10:10:12.958 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v840: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Dec  7 05:10:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:13 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:13.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:13 np0005549474 nova_compute[256753]: 2025-12-07 10:10:13.584 256757 DEBUG nova.compute.manager [req-fa11b09b-c127-4cfb-a63b-6edcc8a4df93 req-fdc03395-2c8e-473f-b332-05ae6e061c9c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Received event network-changed-d7188451-df6a-4332-8055-1f51cc58facf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:10:13 np0005549474 nova_compute[256753]: 2025-12-07 10:10:13.585 256757 DEBUG nova.compute.manager [req-fa11b09b-c127-4cfb-a63b-6edcc8a4df93 req-fdc03395-2c8e-473f-b332-05ae6e061c9c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Refreshing instance network info cache due to event network-changed-d7188451-df6a-4332-8055-1f51cc58facf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:10:13 np0005549474 nova_compute[256753]: 2025-12-07 10:10:13.585 256757 DEBUG oslo_concurrency.lockutils [req-fa11b09b-c127-4cfb-a63b-6edcc8a4df93 req-fdc03395-2c8e-473f-b332-05ae6e061c9c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:10:13 np0005549474 nova_compute[256753]: 2025-12-07 10:10:13.586 256757 DEBUG oslo_concurrency.lockutils [req-fa11b09b-c127-4cfb-a63b-6edcc8a4df93 req-fdc03395-2c8e-473f-b332-05ae6e061c9c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:10:13 np0005549474 nova_compute[256753]: 2025-12-07 10:10:13.586 256757 DEBUG nova.network.neutron [req-fa11b09b-c127-4cfb-a63b-6edcc8a4df93 req-fdc03395-2c8e-473f-b332-05ae6e061c9c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Refreshing network info cache for port d7188451-df6a-4332-8055-1f51cc58facf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:10:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:14 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:14.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:14 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:14 np0005549474 nova_compute[256753]: 2025-12-07 10:10:14.910 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v841: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  7 05:10:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:15.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:16 np0005549474 nova_compute[256753]: 2025-12-07 10:10:16.395 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:16 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:16 np0005549474 nova_compute[256753]: 2025-12-07 10:10:16.599 256757 DEBUG nova.network.neutron [req-fa11b09b-c127-4cfb-a63b-6edcc8a4df93 req-fdc03395-2c8e-473f-b332-05ae6e061c9c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Updated VIF entry in instance network info cache for port d7188451-df6a-4332-8055-1f51cc58facf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:10:16 np0005549474 nova_compute[256753]: 2025-12-07 10:10:16.600 256757 DEBUG nova.network.neutron [req-fa11b09b-c127-4cfb-a63b-6edcc8a4df93 req-fdc03395-2c8e-473f-b332-05ae6e061c9c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Updating instance_info_cache with network_info: [{"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:10:16 np0005549474 nova_compute[256753]: 2025-12-07 10:10:16.620 256757 DEBUG oslo_concurrency.lockutils [req-fa11b09b-c127-4cfb-a63b-6edcc8a4df93 req-fdc03395-2c8e-473f-b332-05ae6e061c9c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
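
The instance_info_cache entry updated above nests addresses several levels deep (VIF -> network -> subnets -> ips -> floating_ips). A short sketch of walking that structure, using a pared-down copy of the logged entry:

    import json

    # Trimmed to the fields used below; shape copied from the cache update above.
    network_info = json.loads("""
    [{"id": "d7188451-df6a-4332-8055-1f51cc58facf",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.7",
                 "floating_ips": [{"address": "192.168.122.209"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floating = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floating)
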
Dec  7 05:10:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:16.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:16 np0005549474 nova_compute[256753]: 2025-12-07 10:10:16.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:10:16 np0005549474 nova_compute[256753]: 2025-12-07 10:10:16.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:10:16 np0005549474 nova_compute[256753]: 2025-12-07 10:10:16.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:10:16 np0005549474 nova_compute[256753]: 2025-12-07 10:10:16.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:10:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:16 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v842: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  7 05:10:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:10:17.135Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:10:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:17 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:17.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:10:17 np0005549474 nova_compute[256753]: 2025-12-07 10:10:17.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:10:17 np0005549474 nova_compute[256753]: 2025-12-07 10:10:17.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:10:17 np0005549474 nova_compute[256753]: 2025-12-07 10:10:17.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:10:17 np0005549474 nova_compute[256753]: 2025-12-07 10:10:17.793 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:10:17 np0005549474 nova_compute[256753]: 2025-12-07 10:10:17.793 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:10:17 np0005549474 nova_compute[256753]: 2025-12-07 10:10:17.793 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:10:17 np0005549474 nova_compute[256753]: 2025-12-07 10:10:17.794 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:10:17 np0005549474 nova_compute[256753]: 2025-12-07 10:10:17.794 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:10:17 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:17.841 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  7 05:10:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:10:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2228523150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:10:18 np0005549474 nova_compute[256753]: 2025-12-07 10:10:18.259 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:10:18 np0005549474 nova_compute[256753]: 2025-12-07 10:10:18.323 256757 DEBUG nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  7 05:10:18 np0005549474 nova_compute[256753]: 2025-12-07 10:10:18.324 256757 DEBUG nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  7 05:10:18 np0005549474 nova_compute[256753]: 2025-12-07 10:10:18.463 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:10:18 np0005549474 nova_compute[256753]: 2025-12-07 10:10:18.464 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4367MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:10:18 np0005549474 nova_compute[256753]: 2025-12-07 10:10:18.465 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:10:18 np0005549474 nova_compute[256753]: 2025-12-07 10:10:18.465 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:10:18 np0005549474 nova_compute[256753]: 2025-12-07 10:10:18.580 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Instance f9232f75-55c6-4982-8757-b2f3408b0ca4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  7 05:10:18 np0005549474 nova_compute[256753]: 2025-12-07 10:10:18.580 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:10:18 np0005549474 nova_compute[256753]: 2025-12-07 10:10:18.581 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:10:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:18 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:18 np0005549474 nova_compute[256753]: 2025-12-07 10:10:18.617 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:10:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:18.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:18 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v843: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  7 05:10:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:10:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148855949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:10:19 np0005549474 nova_compute[256753]: 2025-12-07 10:10:19.093 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:10:19 np0005549474 nova_compute[256753]: 2025-12-07 10:10:19.098 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:10:19 np0005549474 nova_compute[256753]: 2025-12-07 10:10:19.117 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:10:19 np0005549474 nova_compute[256753]: 2025-12-07 10:10:19.141 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:10:19 np0005549474 nova_compute[256753]: 2025-12-07 10:10:19.142 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:10:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:19 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:19.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:19 np0005549474 nova_compute[256753]: 2025-12-07 10:10:19.936 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:10:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:19] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:10:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:19] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:10:20 np0005549474 nova_compute[256753]: 2025-12-07 10:10:20.143 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:10:20 np0005549474 nova_compute[256753]: 2025-12-07 10:10:20.143 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:10:20 np0005549474 nova_compute[256753]: 2025-12-07 10:10:20.144 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:10:20 np0005549474 nova_compute[256753]: 2025-12-07 10:10:20.144 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:10:20 np0005549474 nova_compute[256753]: 2025-12-07 10:10:20.349 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  7 05:10:20 np0005549474 nova_compute[256753]: 2025-12-07 10:10:20.349 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquired lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  7 05:10:20 np0005549474 nova_compute[256753]: 2025-12-07 10:10:20.350 256757 DEBUG nova.network.neutron [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  7 05:10:20 np0005549474 nova_compute[256753]: 2025-12-07 10:10:20.350 256757 DEBUG nova.objects.instance [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lazy-loading 'info_cache' on Instance uuid f9232f75-55c6-4982-8757-b2f3408b0ca4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  7 05:10:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:20 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:20.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:20 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:20 np0005549474 ovn_controller[154296]: 2025-12-07T10:10:20Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f5:bd:b8 10.100.0.7
Dec  7 05:10:20 np0005549474 ovn_controller[154296]: 2025-12-07T10:10:20Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f5:bd:b8 10.100.0.7
Dec  7 05:10:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v844: 337 pgs: 337 active+clean; 92 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 349 KiB/s wr, 69 op/s
Dec  7 05:10:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:21 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:21 np0005549474 nova_compute[256753]: 2025-12-07 10:10:21.397 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:10:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:21.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:21 np0005549474 nova_compute[256753]: 2025-12-07 10:10:21.517 256757 DEBUG nova.network.neutron [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Updating instance_info_cache with network_info: [{"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:10:21 np0005549474 nova_compute[256753]: 2025-12-07 10:10:21.534 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Releasing lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:10:21 np0005549474 nova_compute[256753]: 2025-12-07 10:10:21.534 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  7 05:10:21 np0005549474 nova_compute[256753]: 2025-12-07 10:10:21.535 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:10:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:22 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:10:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:22.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:22 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580002ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v845: 337 pgs: 337 active+clean; 92 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 337 KiB/s wr, 56 op/s
Dec  7 05:10:23 np0005549474 nova_compute[256753]: 2025-12-07 10:10:23.140 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:10:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:23 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:23.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:24 np0005549474 podman[266887]: 2025-12-07 10:10:24.273012052 +0000 UTC m=+0.078753734 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  7 05:10:24 np0005549474 podman[266888]: 2025-12-07 10:10:24.315161724 +0000 UTC m=+0.109024732 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  7 05:10:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:24 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:24.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:24 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:24 np0005549474 nova_compute[256753]: 2025-12-07 10:10:24.970 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:10:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v846: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Dec  7 05:10:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:25 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580002ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:25.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:26 np0005549474 nova_compute[256753]: 2025-12-07 10:10:26.445 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:10:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:26 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:26 np0005549474 nova_compute[256753]: 2025-12-07 10:10:26.741 256757 INFO nova.compute.manager [None req-1e759f39-439f-4aba-8638-70f7aee6c2c2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Get console output
Dec  7 05:10:26 np0005549474 nova_compute[256753]: 2025-12-07 10:10:26.745 263860 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec  7 05:10:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:26.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:26 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v847: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  7 05:10:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:10:27.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:10:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:27 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:10:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:10:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:27.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:10:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:28 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800037f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:28 np0005549474 nova_compute[256753]: 2025-12-07 10:10:28.703 256757 DEBUG nova.compute.manager [req-558253fb-5c74-4a00-ad53-8d3d86e559e1 req-b107d2b5-4033-48a4-8640-7a7ddfe9818f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Received event network-changed-d7188451-df6a-4332-8055-1f51cc58facf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:10:28 np0005549474 nova_compute[256753]: 2025-12-07 10:10:28.704 256757 DEBUG nova.compute.manager [req-558253fb-5c74-4a00-ad53-8d3d86e559e1 req-b107d2b5-4033-48a4-8640-7a7ddfe9818f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Refreshing instance network info cache due to event network-changed-d7188451-df6a-4332-8055-1f51cc58facf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  7 05:10:28 np0005549474 nova_compute[256753]: 2025-12-07 10:10:28.704 256757 DEBUG oslo_concurrency.lockutils [req-558253fb-5c74-4a00-ad53-8d3d86e559e1 req-b107d2b5-4033-48a4-8640-7a7ddfe9818f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  7 05:10:28 np0005549474 nova_compute[256753]: 2025-12-07 10:10:28.704 256757 DEBUG oslo_concurrency.lockutils [req-558253fb-5c74-4a00-ad53-8d3d86e559e1 req-b107d2b5-4033-48a4-8640-7a7ddfe9818f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  7 05:10:28 np0005549474 nova_compute[256753]: 2025-12-07 10:10:28.704 256757 DEBUG nova.network.neutron [req-558253fb-5c74-4a00-ad53-8d3d86e559e1 req-b107d2b5-4033-48a4-8640-7a7ddfe9818f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Refreshing network info cache for port d7188451-df6a-4332-8055-1f51cc58facf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  7 05:10:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:28.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:28 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v848: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  7 05:10:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:29 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:29.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:29 np0005549474 nova_compute[256753]: 2025-12-07 10:10:29.973 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:10:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:29] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Dec  7 05:10:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:29] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Dec  7 05:10:30 np0005549474 podman[266939]: 2025-12-07 10:10:30.266263009 +0000 UTC m=+0.068014301 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 05:10:30 np0005549474 nova_compute[256753]: 2025-12-07 10:10:30.363 256757 DEBUG nova.network.neutron [req-558253fb-5c74-4a00-ad53-8d3d86e559e1 req-b107d2b5-4033-48a4-8640-7a7ddfe9818f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Updated VIF entry in instance network info cache for port d7188451-df6a-4332-8055-1f51cc58facf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  7 05:10:30 np0005549474 nova_compute[256753]: 2025-12-07 10:10:30.364 256757 DEBUG nova.network.neutron [req-558253fb-5c74-4a00-ad53-8d3d86e559e1 req-b107d2b5-4033-48a4-8640-7a7ddfe9818f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Updating instance_info_cache with network_info: [{"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:10:30 np0005549474 nova_compute[256753]: 2025-12-07 10:10:30.381 256757 DEBUG oslo_concurrency.lockutils [req-558253fb-5c74-4a00-ad53-8d3d86e559e1 req-b107d2b5-4033-48a4-8640-7a7ddfe9818f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-f9232f75-55c6-4982-8757-b2f3408b0ca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:10:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/101030 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:10:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:30 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:30.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:30 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800037f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v849: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  7 05:10:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:31 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:31 np0005549474 nova_compute[256753]: 2025-12-07 10:10:31.446 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:10:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:31.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:32 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:10:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:32.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:32 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v850: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Dec  7 05:10:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:33 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800037f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:33.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:34 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800037f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:34.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:34 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:34 np0005549474 nova_compute[256753]: 2025-12-07 10:10:34.976 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:10:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v851: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Dec  7 05:10:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:35 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:35.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:36 np0005549474 nova_compute[256753]: 2025-12-07 10:10:36.448 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:10:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:36 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800037f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:36.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:36 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v852: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec  7 05:10:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:10:37.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:10:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:37 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:37.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:10:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:38 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:38.620 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:10:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:38.621 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:10:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:10:38.622 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
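[annotation] The three ovn_metadata_agent DEBUG lines are oslo.concurrency's standard trace for one guarded call: acquire request, acquisition (with wait time), release (with hold time), all emitted by the `inner` wrapper in lockutils.py as the paths show. A minimal sketch that emits the same trio, assuming oslo.concurrency is installed (the guarded function here is illustrative):

    # lock_demo.py - produce the acquire/acquired/released DEBUG trio.
    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # work done while the semaphore is held

    check_child_processes()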
Dec  7 05:10:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:38.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:38 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800037f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v853: 337 pgs: 337 active+clean; 158 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.6 MiB/s wr, 27 op/s
Dec  7 05:10:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:39 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  7 05:10:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:39.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  7 05:10:39 np0005549474 nova_compute[256753]: 2025-12-07 10:10:39.978 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:39] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Dec  7 05:10:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:39] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
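[annotation] The same Prometheus scrape is logged twice — once via the mgr container's stdout, once by the mgr daemon's cherrypy access logger — and the ~48 kB body drifts slightly between scrapes as metric values change. Fetching the exporter by hand, assuming the prometheus module's default port 9283:

    # scrape_mgr.py - pull the ceph-mgr prometheus exporter once.
    import urllib.request

    body = urllib.request.urlopen(
        "http://192.168.122.100:9283/metrics", timeout=5).read().decode()
    print(len(body), "bytes,", body.count("\n"), "lines")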
Dec  7 05:10:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:40 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:40.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:40 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v854: 337 pgs: 337 active+clean; 167 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:10:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:41 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800037f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:41 np0005549474 nova_compute[256753]: 2025-12-07 10:10:41.452 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:41.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:10:42
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', '.rgw.root']
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
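[annotation] One balancer round, start to finish: a plan named after the timestamp, upmap mode with the 5% max-misplaced guard, and "prepared 0/10 upmap changes" — zero of the per-round cap of 10 optimizations — meaning the PG distribution is already even and no pg-upmap-items commands will be issued. The same conclusion is queryable from a shell, assuming a reachable cluster and admin keyring:

    # balancer_status.py - ask the mgr what the balancer last did.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    print(json.dumps(json.loads(out), indent=2))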
Dec  7 05:10:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:10:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:10:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:42 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:10:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:42.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011057152275835123 of space, bias 1.0, pg target 0.3317145682750537 quantized to 32 (current 32)
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
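[annotation] The pg_autoscaler numbers above are internally consistent: each raw pg target equals usage_ratio x bias x 300, and a budget of 300 is exactly what 3 OSDs at the default mon_target_pg_per_osd of 100 would give. The quantized value then stays at the current pg_num (32, 16, 1) because the autoscaler only acts when target and current differ by roughly a factor of 3. A check against three of the logged pools:

    # autoscaler_check.py - reproduce the raw pg targets logged above.
    # Assumes the budget is 3 OSDs x mon_target_pg_per_osd (100) = 300,
    # which matches every pool's logged target.
    PG_BUDGET = 3 * 100

    pools = {  # usage ratio and bias copied from the log lines
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0011057152275835123, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * PG_BUDGET}")

Running this prints 0.00215572..., 0.33171456..., and 0.00061047..., matching the logged targets to full precision.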
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:10:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:10:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:42 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v855: 337 pgs: 337 active+clean; 167 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:10:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 05:10:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 9158 writes, 34K keys, 9158 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9158 writes, 2255 syncs, 4.06 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1505 writes, 4088 keys, 1505 commit groups, 1.0 writes per commit group, ingest: 3.55 MB, 0.01 MB/s#012Interval WAL: 1505 writes, 680 syncs, 2.21 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
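[annotation] The #012 runs in this rocksdb dump are octal escapes for embedded newlines — the same rsyslog-style control-character escaping that renders ANSI resets elsewhere in this log as #033[00m. Undoing them restores the multi-line "DB Stats" block:

    # unescape_syslog.py - undo #NNN octal escapes (e.g. #012 -> newline)
    # so multi-line payloads like the rocksdb stats dump read normally.
    # Crude: would also rewrite a legitimate literal such as "#123".
    import re, sys

    def unescape(line: str) -> str:
        return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), line)

    for line in sys.stdin:
        sys.stdout.write(unescape(line))

Decoded, the block is the usual RocksDB section: cumulative vs. interval writes, WAL syncs, and stall time (0.0 percent here, i.e. no write stalls in 30 minutes of uptime).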
Dec  7 05:10:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:43 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564002830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:43.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:44 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:44.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:44 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:44 np0005549474 nova_compute[256753]: 2025-12-07 10:10:44.981 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v856: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:10:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:45 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:45.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:46 np0005549474 nova_compute[256753]: 2025-12-07 10:10:46.499 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564002830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:46.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v857: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:10:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:10:47.139Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:10:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:47 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:47.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
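[annotation] This mon line repeats every ~5 seconds with identical values, which looks like the steady state of the mon's RocksDB cache tuner: the three carve-outs (inc 328 MiB, full 332 MiB, kv 304 MiB) sum to 964 MiB, just inside the 972.8 MiB cache_size. The arithmetic, straight from the logged values:

    # mon_cache_check.py - sanity-check the split from _set_new_cache_sizes.
    cache_size = 1020054731
    allocs = {"inc": 343932928, "full": 348127232, "kv": 318767104}

    total = sum(allocs.values())
    for name, nbytes in allocs.items():
        print(f"{name}_alloc = {nbytes / 2**20:.0f} MiB")
    print(f"sum = {total / 2**20:.0f} MiB of {cache_size / 2**20:.1f} MiB",
          "(fits)" if total <= cache_size else "(overcommitted!)")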
Dec  7 05:10:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:48 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:48.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:48 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v858: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec  7 05:10:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:49 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564002830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:49.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:49 np0005549474 nova_compute[256753]: 2025-12-07 10:10:49.984 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:49] "GET /metrics HTTP/1.1" 200 48405 "" "Prometheus/2.51.0"
Dec  7 05:10:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:49] "GET /metrics HTTP/1.1" 200 48405 "" "Prometheus/2.51.0"
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:50 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:50.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:50 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v859: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 240 KiB/s wr, 74 op/s
Dec  7 05:10:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:51 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:51 np0005549474 podman[267253]: 2025-12-07 10:10:51.429314928 +0000 UTC m=+0.063609644 container create 5447cf625a7245a2d939f3a655dc89052c634d86d085f62d86b2220b64bd95c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:10:51 np0005549474 systemd[1]: Started libpod-conmon-5447cf625a7245a2d939f3a655dc89052c634d86d085f62d86b2220b64bd95c4.scope.
Dec  7 05:10:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:51.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:51 np0005549474 podman[267253]: 2025-12-07 10:10:51.398720245 +0000 UTC m=+0.033015011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:10:51 np0005549474 nova_compute[256753]: 2025-12-07 10:10:51.502 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:10:51 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:10:51 np0005549474 podman[267253]: 2025-12-07 10:10:51.527911234 +0000 UTC m=+0.162205990 container init 5447cf625a7245a2d939f3a655dc89052c634d86d085f62d86b2220b64bd95c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 05:10:51 np0005549474 podman[267253]: 2025-12-07 10:10:51.537094844 +0000 UTC m=+0.171389520 container start 5447cf625a7245a2d939f3a655dc89052c634d86d085f62d86b2220b64bd95c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:10:51 np0005549474 podman[267253]: 2025-12-07 10:10:51.540690703 +0000 UTC m=+0.174985389 container attach 5447cf625a7245a2d939f3a655dc89052c634d86d085f62d86b2220b64bd95c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:10:51 np0005549474 jolly_dewdney[267270]: 167 167
Dec  7 05:10:51 np0005549474 systemd[1]: libpod-5447cf625a7245a2d939f3a655dc89052c634d86d085f62d86b2220b64bd95c4.scope: Deactivated successfully.
Dec  7 05:10:51 np0005549474 podman[267253]: 2025-12-07 10:10:51.546743118 +0000 UTC m=+0.181037824 container died 5447cf625a7245a2d939f3a655dc89052c634d86d085f62d86b2220b64bd95c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:10:51 np0005549474 systemd[1]: var-lib-containers-storage-overlay-23fca9f4422028d47acbcc69694b8f55d170f66b0052f5b5d47cbc0ced7091b5-merged.mount: Deactivated successfully.
Dec  7 05:10:51 np0005549474 podman[267253]: 2025-12-07 10:10:51.605747185 +0000 UTC m=+0.240041871 container remove 5447cf625a7245a2d939f3a655dc89052c634d86d085f62d86b2220b64bd95c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:10:51 np0005549474 systemd[1]: libpod-conmon-5447cf625a7245a2d939f3a655dc89052c634d86d085f62d86b2220b64bd95c4.scope: Deactivated successfully.
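[annotation] That whole podman burst — container create through conmon scope teardown — is one short-lived cephadm helper: it runs the pinned ceph image under a random name (jolly_dewdney), prints "167 167", and is gone within ~200 ms. The output looks like cephadm's uid/gid probe, which stats /var/lib/ceph inside the image (167 is the ceph user and group there). An equivalent one-shot run, assuming the digest from the log:

    # oneshot.py - the create/init/start/attach/died/remove sequence above
    # is what a single "podman run --rm" produces; this reruns the probe.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True)
    print(out.stdout.strip())  # expect "167 167"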
Dec  7 05:10:51 np0005549474 podman[267293]: 2025-12-07 10:10:51.789999926 +0000 UTC m=+0.036308390 container create a70c8bcf0c32bfa98dde69bcee0be778416caec16ad5a6067d9c459434ebe8fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 05:10:51 np0005549474 systemd[1]: Started libpod-conmon-a70c8bcf0c32bfa98dde69bcee0be778416caec16ad5a6067d9c459434ebe8fb.scope.
Dec  7 05:10:51 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:10:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f9ca439a8b4194f93d71500a90f74076c29682ef3df6ab38a75098c18fade8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f9ca439a8b4194f93d71500a90f74076c29682ef3df6ab38a75098c18fade8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f9ca439a8b4194f93d71500a90f74076c29682ef3df6ab38a75098c18fade8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f9ca439a8b4194f93d71500a90f74076c29682ef3df6ab38a75098c18fade8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:51 np0005549474 podman[267293]: 2025-12-07 10:10:51.773788024 +0000 UTC m=+0.020096508 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:10:51 np0005549474 podman[267293]: 2025-12-07 10:10:51.876965215 +0000 UTC m=+0.123273679 container init a70c8bcf0c32bfa98dde69bcee0be778416caec16ad5a6067d9c459434ebe8fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_montalcini, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:10:51 np0005549474 podman[267293]: 2025-12-07 10:10:51.884450149 +0000 UTC m=+0.130758623 container start a70c8bcf0c32bfa98dde69bcee0be778416caec16ad5a6067d9c459434ebe8fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_montalcini, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:10:51 np0005549474 podman[267293]: 2025-12-07 10:10:51.887954375 +0000 UTC m=+0.134262839 container attach a70c8bcf0c32bfa98dde69bcee0be778416caec16ad5a6067d9c459434ebe8fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:10:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564001db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]: [
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:    {
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:        "available": false,
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:        "being_replaced": false,
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:        "ceph_device_lvm": false,
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:        "lsm_data": {},
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:        "lvs": [],
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:        "path": "/dev/sr0",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:        "rejected_reasons": [
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "Has a FileSystem",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "Insufficient space (<5GB)"
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:        ],
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:        "sys_api": {
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "actuators": null,
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "device_nodes": [
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:                "sr0"
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            ],
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "devname": "sr0",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "human_readable_size": "482.00 KB",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "id_bus": "ata",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "model": "QEMU DVD-ROM",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "nr_requests": "2",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "parent": "/dev/sr0",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "partitions": {},
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "path": "/dev/sr0",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "removable": "1",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "rev": "2.5+",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "ro": "0",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "rotational": "1",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "sas_address": "",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "sas_device_handle": "",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "scheduler_mode": "mq-deadline",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "sectors": 0,
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "sectorsize": "2048",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "size": 493568.0,
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "support_discard": "2048",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "type": "disk",
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:            "vendor": "QEMU"
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:        }
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]:    }
Dec  7 05:10:52 np0005549474 jovial_montalcini[267309]: ]
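[annotation] The JSON array printed by jovial_montalcini is ceph-volume inventory output gathered by another throwaway cephadm container: this host's only candidate block device is the QEMU DVD-ROM at /dev/sr0, rejected for having a filesystem and for being under 5 GB, hence "available": false. The surrounding config-key set commands store exactly this document under mgr/cephadm/host.<name>.devices.0. A filter sketch over the same format:

    # usable_devices.py - reduce ceph-volume inventory JSON to devices
    # cephadm could actually deploy OSDs on.
    import json, sys

    devices = json.load(sys.stdin)  # e.g. the array logged above
    for dev in devices:
        if dev["available"]:
            print(dev["path"], dev["sys_api"]["human_readable_size"])
        else:
            print(dev["path"], "rejected:",
                  ", ".join(dev["rejected_reasons"]))

Fed the logged array, it prints: /dev/sr0 rejected: Has a FileSystem, Insufficient space (<5GB).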
Dec  7 05:10:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:52.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:52 np0005549474 systemd[1]: libpod-a70c8bcf0c32bfa98dde69bcee0be778416caec16ad5a6067d9c459434ebe8fb.scope: Deactivated successfully.
Dec  7 05:10:52 np0005549474 conmon[267309]: conmon a70c8bcf0c32bfa98dde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a70c8bcf0c32bfa98dde69bcee0be778416caec16ad5a6067d9c459434ebe8fb.scope/container/memory.events
Dec  7 05:10:52 np0005549474 podman[267293]: 2025-12-07 10:10:52.805111794 +0000 UTC m=+1.051420268 container died a70c8bcf0c32bfa98dde69bcee0be778416caec16ad5a6067d9c459434ebe8fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:52 np0005549474 systemd[1]: var-lib-containers-storage-overlay-02f9ca439a8b4194f93d71500a90f74076c29682ef3df6ab38a75098c18fade8-merged.mount: Deactivated successfully.
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:52 np0005549474 podman[267293]: 2025-12-07 10:10:52.849302858 +0000 UTC m=+1.095611332 container remove a70c8bcf0c32bfa98dde69bcee0be778416caec16ad5a6067d9c459434ebe8fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:10:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:52 np0005549474 systemd[1]: libpod-conmon-a70c8bcf0c32bfa98dde69bcee0be778416caec16ad5a6067d9c459434ebe8fb.scope: Deactivated successfully.
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:10:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
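[annotation] This handle_command burst is the cephadm mgr's periodic refresh talking to the mon, all audited as mgr.compute-0.dotugk: config-key writes (per-host device inventory, host metadata, the nfs.cephfs spec, the OSD removal queue) land in the audit channel at [INF], while read-only queries (osd blocklist ls, the osd tree of destroyed OSDs, auth get, config generate-minimal-conf) land at [DBG]. Two of the read-only ones, replayed from a shell:

    # mon_queries.py - reissue two read-only mon commands from the audit
    # trail via the ceph CLI (assumes a client.admin keyring).
    import subprocess

    for cmd in (["ceph", "config", "generate-minimal-conf"],
                ["ceph", "osd", "blocklist", "ls", "--format", "json"]):
        print("$", " ".join(cmd))
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)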
Dec  7 05:10:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v860: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  7 05:10:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:53 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:53.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:53 np0005549474 podman[268769]: 2025-12-07 10:10:53.588271762 +0000 UTC m=+0.042666013 container create 817554b241f0c17a9906fad17d679c811a34e0f23625614895c3147abc6de00a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:10:53 np0005549474 systemd[1]: Started libpod-conmon-817554b241f0c17a9906fad17d679c811a34e0f23625614895c3147abc6de00a.scope.
Dec  7 05:10:53 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:10:53 np0005549474 podman[268769]: 2025-12-07 10:10:53.573063088 +0000 UTC m=+0.027457359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:10:53 np0005549474 podman[268769]: 2025-12-07 10:10:53.680953418 +0000 UTC m=+0.135347689 container init 817554b241f0c17a9906fad17d679c811a34e0f23625614895c3147abc6de00a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_liskov, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:10:53 np0005549474 podman[268769]: 2025-12-07 10:10:53.690873748 +0000 UTC m=+0.145267999 container start 817554b241f0c17a9906fad17d679c811a34e0f23625614895c3147abc6de00a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_liskov, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:10:53 np0005549474 podman[268769]: 2025-12-07 10:10:53.695192276 +0000 UTC m=+0.149586527 container attach 817554b241f0c17a9906fad17d679c811a34e0f23625614895c3147abc6de00a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_liskov, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:10:53 np0005549474 keen_liskov[268786]: 167 167
Dec  7 05:10:53 np0005549474 systemd[1]: libpod-817554b241f0c17a9906fad17d679c811a34e0f23625614895c3147abc6de00a.scope: Deactivated successfully.
Dec  7 05:10:53 np0005549474 conmon[268786]: conmon 817554b241f0c17a9906 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-817554b241f0c17a9906fad17d679c811a34e0f23625614895c3147abc6de00a.scope/container/memory.events
Dec  7 05:10:53 np0005549474 podman[268769]: 2025-12-07 10:10:53.699966256 +0000 UTC m=+0.154360527 container died 817554b241f0c17a9906fad17d679c811a34e0f23625614895c3147abc6de00a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_liskov, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 05:10:53 np0005549474 systemd[1]: var-lib-containers-storage-overlay-c4d15f040a357fa03a8fb07f89b2b2e6c304c69fa26ba8ac12b14cbd95193596-merged.mount: Deactivated successfully.
Dec  7 05:10:53 np0005549474 podman[268769]: 2025-12-07 10:10:53.742076123 +0000 UTC m=+0.196470374 container remove 817554b241f0c17a9906fad17d679c811a34e0f23625614895c3147abc6de00a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_liskov, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 05:10:53 np0005549474 systemd[1]: libpod-conmon-817554b241f0c17a9906fad17d679c811a34e0f23625614895c3147abc6de00a.scope: Deactivated successfully.
Dec  7 05:10:53 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:53 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:53 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:53 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:53 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:10:53 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:53 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:53 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:10:53 np0005549474 podman[268809]: 2025-12-07 10:10:53.942161265 +0000 UTC m=+0.041865391 container create b2e866754cdf872f73df130d2fc2b7f7e85a2c89729c19a1193519a70184f873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:10:53 np0005549474 systemd[1]: Started libpod-conmon-b2e866754cdf872f73df130d2fc2b7f7e85a2c89729c19a1193519a70184f873.scope.
Dec  7 05:10:54 np0005549474 podman[268809]: 2025-12-07 10:10:53.924156154 +0000 UTC m=+0.023860260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:10:54 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:10:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9a5ef202415fb6fc7e3db65e508a044077f81ea75d28501caeecadb17d3657/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9a5ef202415fb6fc7e3db65e508a044077f81ea75d28501caeecadb17d3657/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9a5ef202415fb6fc7e3db65e508a044077f81ea75d28501caeecadb17d3657/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9a5ef202415fb6fc7e3db65e508a044077f81ea75d28501caeecadb17d3657/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9a5ef202415fb6fc7e3db65e508a044077f81ea75d28501caeecadb17d3657/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:54 np0005549474 podman[268809]: 2025-12-07 10:10:54.058102114 +0000 UTC m=+0.157806230 container init b2e866754cdf872f73df130d2fc2b7f7e85a2c89729c19a1193519a70184f873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:10:54 np0005549474 podman[268809]: 2025-12-07 10:10:54.078224432 +0000 UTC m=+0.177928548 container start b2e866754cdf872f73df130d2fc2b7f7e85a2c89729c19a1193519a70184f873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 05:10:54 np0005549474 podman[268809]: 2025-12-07 10:10:54.081643206 +0000 UTC m=+0.181347312 container attach b2e866754cdf872f73df130d2fc2b7f7e85a2c89729c19a1193519a70184f873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:10:54 np0005549474 stoic_jang[268826]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:10:54 np0005549474 stoic_jang[268826]: --> All data devices are unavailable
Dec  7 05:10:54 np0005549474 systemd[1]: libpod-b2e866754cdf872f73df130d2fc2b7f7e85a2c89729c19a1193519a70184f873.scope: Deactivated successfully.
Dec  7 05:10:54 np0005549474 podman[268809]: 2025-12-07 10:10:54.482631401 +0000 UTC m=+0.582335537 container died b2e866754cdf872f73df130d2fc2b7f7e85a2c89729c19a1193519a70184f873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:10:54 np0005549474 systemd[1]: var-lib-containers-storage-overlay-1a9a5ef202415fb6fc7e3db65e508a044077f81ea75d28501caeecadb17d3657-merged.mount: Deactivated successfully.
Dec  7 05:10:54 np0005549474 podman[268809]: 2025-12-07 10:10:54.54351943 +0000 UTC m=+0.643223516 container remove b2e866754cdf872f73df130d2fc2b7f7e85a2c89729c19a1193519a70184f873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 05:10:54 np0005549474 systemd[1]: libpod-conmon-b2e866754cdf872f73df130d2fc2b7f7e85a2c89729c19a1193519a70184f873.scope: Deactivated successfully.
Dec  7 05:10:54 np0005549474 podman[268842]: 2025-12-07 10:10:54.604052809 +0000 UTC m=+0.080248287 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  7 05:10:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:54 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:54 np0005549474 podman[268850]: 2025-12-07 10:10:54.678290603 +0000 UTC m=+0.154364508 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:10:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:54.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:54 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564001db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:54 np0005549474 nova_compute[256753]: 2025-12-07 10:10:54.986 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:10:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v861: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec  7 05:10:55 np0005549474 podman[268986]: 2025-12-07 10:10:55.234901988 +0000 UTC m=+0.046194769 container create d7c8ef9d641dddb1da7b99e7949d6b74a07dbf8ebfa235dc04ee0a17ee5e6c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:10:55 np0005549474 systemd[1]: Started libpod-conmon-d7c8ef9d641dddb1da7b99e7949d6b74a07dbf8ebfa235dc04ee0a17ee5e6c75.scope.
Dec  7 05:10:55 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:10:55 np0005549474 podman[268986]: 2025-12-07 10:10:55.302731496 +0000 UTC m=+0.114024297 container init d7c8ef9d641dddb1da7b99e7949d6b74a07dbf8ebfa235dc04ee0a17ee5e6c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_banzai, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:10:55 np0005549474 podman[268986]: 2025-12-07 10:10:55.210127873 +0000 UTC m=+0.021420744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:10:55 np0005549474 podman[268986]: 2025-12-07 10:10:55.308939366 +0000 UTC m=+0.120232147 container start d7c8ef9d641dddb1da7b99e7949d6b74a07dbf8ebfa235dc04ee0a17ee5e6c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:10:55 np0005549474 podman[268986]: 2025-12-07 10:10:55.312185614 +0000 UTC m=+0.123478415 container attach d7c8ef9d641dddb1da7b99e7949d6b74a07dbf8ebfa235dc04ee0a17ee5e6c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_banzai, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:10:55 np0005549474 systemd[1]: libpod-d7c8ef9d641dddb1da7b99e7949d6b74a07dbf8ebfa235dc04ee0a17ee5e6c75.scope: Deactivated successfully.
Dec  7 05:10:55 np0005549474 hungry_banzai[269002]: 167 167
Dec  7 05:10:55 np0005549474 conmon[269002]: conmon d7c8ef9d641dddb1da7b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7c8ef9d641dddb1da7b99e7949d6b74a07dbf8ebfa235dc04ee0a17ee5e6c75.scope/container/memory.events
Dec  7 05:10:55 np0005549474 podman[268986]: 2025-12-07 10:10:55.315773661 +0000 UTC m=+0.127066502 container died d7c8ef9d641dddb1da7b99e7949d6b74a07dbf8ebfa235dc04ee0a17ee5e6c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  7 05:10:55 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e7ce33eb0ec09a4d1ebffd4b7fc311920cb0cfb1468034de2d2cfe064dab3daf-merged.mount: Deactivated successfully.
Dec  7 05:10:55 np0005549474 podman[268986]: 2025-12-07 10:10:55.355091053 +0000 UTC m=+0.166383834 container remove d7c8ef9d641dddb1da7b99e7949d6b74a07dbf8ebfa235dc04ee0a17ee5e6c75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 05:10:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:55 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:55 np0005549474 systemd[1]: libpod-conmon-d7c8ef9d641dddb1da7b99e7949d6b74a07dbf8ebfa235dc04ee0a17ee5e6c75.scope: Deactivated successfully.
Dec  7 05:10:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:55.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:55 np0005549474 podman[269028]: 2025-12-07 10:10:55.529963698 +0000 UTC m=+0.052123342 container create 38ac4bf0edbbebd0cc0a6b68a46ed2f9a5d1d40249d40e9f83c961e59252d6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:10:55 np0005549474 systemd[1]: Started libpod-conmon-38ac4bf0edbbebd0cc0a6b68a46ed2f9a5d1d40249d40e9f83c961e59252d6bf.scope.
Dec  7 05:10:55 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:10:55 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbdc6d2adc77ab9792211b2db924608ba0c405a43ca4d8da3f6ac289205f4446/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:55 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbdc6d2adc77ab9792211b2db924608ba0c405a43ca4d8da3f6ac289205f4446/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:55 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbdc6d2adc77ab9792211b2db924608ba0c405a43ca4d8da3f6ac289205f4446/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:55 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbdc6d2adc77ab9792211b2db924608ba0c405a43ca4d8da3f6ac289205f4446/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:55 np0005549474 podman[269028]: 2025-12-07 10:10:55.605612459 +0000 UTC m=+0.127772113 container init 38ac4bf0edbbebd0cc0a6b68a46ed2f9a5d1d40249d40e9f83c961e59252d6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:10:55 np0005549474 podman[269028]: 2025-12-07 10:10:55.514819695 +0000 UTC m=+0.036979359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:10:55 np0005549474 podman[269028]: 2025-12-07 10:10:55.618862239 +0000 UTC m=+0.141021893 container start 38ac4bf0edbbebd0cc0a6b68a46ed2f9a5d1d40249d40e9f83c961e59252d6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:10:55 np0005549474 podman[269028]: 2025-12-07 10:10:55.623234679 +0000 UTC m=+0.145394323 container attach 38ac4bf0edbbebd0cc0a6b68a46ed2f9a5d1d40249d40e9f83c961e59252d6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]: {
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:    "0": [
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:        {
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            "devices": [
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "/dev/loop3"
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            ],
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            "lv_name": "ceph_lv0",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            "lv_size": "21470642176",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            "name": "ceph_lv0",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            "tags": {
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.cluster_name": "ceph",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.crush_device_class": "",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.encrypted": "0",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.osd_id": "0",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.type": "block",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.vdo": "0",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:                "ceph.with_tpm": "0"
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            },
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            "type": "block",
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:            "vg_name": "ceph_vg0"
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:        }
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]:    ]
Dec  7 05:10:55 np0005549474 magical_satoshi[269045]: }
Dec  7 05:10:55 np0005549474 systemd[1]: libpod-38ac4bf0edbbebd0cc0a6b68a46ed2f9a5d1d40249d40e9f83c961e59252d6bf.scope: Deactivated successfully.
Dec  7 05:10:55 np0005549474 podman[269028]: 2025-12-07 10:10:55.918792071 +0000 UTC m=+0.440951715 container died 38ac4bf0edbbebd0cc0a6b68a46ed2f9a5d1d40249d40e9f83c961e59252d6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:10:55 np0005549474 systemd[1]: var-lib-containers-storage-overlay-bbdc6d2adc77ab9792211b2db924608ba0c405a43ca4d8da3f6ac289205f4446-merged.mount: Deactivated successfully.
Dec  7 05:10:55 np0005549474 podman[269028]: 2025-12-07 10:10:55.956692645 +0000 UTC m=+0.478852289 container remove 38ac4bf0edbbebd0cc0a6b68a46ed2f9a5d1d40249d40e9f83c961e59252d6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 05:10:55 np0005549474 systemd[1]: libpod-conmon-38ac4bf0edbbebd0cc0a6b68a46ed2f9a5d1d40249d40e9f83c961e59252d6bf.scope: Deactivated successfully.
Dec  7 05:10:56 np0005549474 nova_compute[256753]: 2025-12-07 10:10:56.556 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:10:56 np0005549474 podman[269160]: 2025-12-07 10:10:56.568009201 +0000 UTC m=+0.079829716 container create 44c384fda2525f88b2d3ba9568993c5eaf77c4f32e5146c67b92df203bae61be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:10:56 np0005549474 podman[269160]: 2025-12-07 10:10:56.51292904 +0000 UTC m=+0.024749595 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:10:56 np0005549474 systemd[1]: Started libpod-conmon-44c384fda2525f88b2d3ba9568993c5eaf77c4f32e5146c67b92df203bae61be.scope.
Dec  7 05:10:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:56 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:56 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:10:56 np0005549474 podman[269160]: 2025-12-07 10:10:56.657466188 +0000 UTC m=+0.169286703 container init 44c384fda2525f88b2d3ba9568993c5eaf77c4f32e5146c67b92df203bae61be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 05:10:56 np0005549474 podman[269160]: 2025-12-07 10:10:56.663117242 +0000 UTC m=+0.174937757 container start 44c384fda2525f88b2d3ba9568993c5eaf77c4f32e5146c67b92df203bae61be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 05:10:56 np0005549474 podman[269160]: 2025-12-07 10:10:56.665881228 +0000 UTC m=+0.177701753 container attach 44c384fda2525f88b2d3ba9568993c5eaf77c4f32e5146c67b92df203bae61be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_raman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  7 05:10:56 np0005549474 sad_raman[269176]: 167 167
Dec  7 05:10:56 np0005549474 systemd[1]: libpod-44c384fda2525f88b2d3ba9568993c5eaf77c4f32e5146c67b92df203bae61be.scope: Deactivated successfully.
Dec  7 05:10:56 np0005549474 podman[269160]: 2025-12-07 10:10:56.667442251 +0000 UTC m=+0.179262766 container died 44c384fda2525f88b2d3ba9568993c5eaf77c4f32e5146c67b92df203bae61be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 05:10:56 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f9475fe9ccd73679d3c931ba3f8509b3732de677b41f2495498233137ef159c9-merged.mount: Deactivated successfully.
Dec  7 05:10:56 np0005549474 podman[269160]: 2025-12-07 10:10:56.704388547 +0000 UTC m=+0.216209052 container remove 44c384fda2525f88b2d3ba9568993c5eaf77c4f32e5146c67b92df203bae61be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_raman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:10:56 np0005549474 systemd[1]: libpod-conmon-44c384fda2525f88b2d3ba9568993c5eaf77c4f32e5146c67b92df203bae61be.scope: Deactivated successfully.
Dec  7 05:10:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:10:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:56.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:10:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:56 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:56 np0005549474 podman[269202]: 2025-12-07 10:10:56.896037668 +0000 UTC m=+0.039145437 container create 061d5b69ceaf243af42e5e35d58d75cf3f87e5f81316da7816aa4cd0b137159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 05:10:56 np0005549474 systemd[1]: Started libpod-conmon-061d5b69ceaf243af42e5e35d58d75cf3f87e5f81316da7816aa4cd0b137159b.scope.
Dec  7 05:10:56 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:10:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75845da82d2e257f64b039731c4f3e3d9df238b4a48d6896ff13a129495a3c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75845da82d2e257f64b039731c4f3e3d9df238b4a48d6896ff13a129495a3c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75845da82d2e257f64b039731c4f3e3d9df238b4a48d6896ff13a129495a3c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75845da82d2e257f64b039731c4f3e3d9df238b4a48d6896ff13a129495a3c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:10:56 np0005549474 podman[269202]: 2025-12-07 10:10:56.876765723 +0000 UTC m=+0.019873452 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:10:56 np0005549474 podman[269202]: 2025-12-07 10:10:56.9775928 +0000 UTC m=+0.120700549 container init 061d5b69ceaf243af42e5e35d58d75cf3f87e5f81316da7816aa4cd0b137159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 05:10:56 np0005549474 podman[269202]: 2025-12-07 10:10:56.986111763 +0000 UTC m=+0.129219522 container start 061d5b69ceaf243af42e5e35d58d75cf3f87e5f81316da7816aa4cd0b137159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 05:10:56 np0005549474 podman[269202]: 2025-12-07 10:10:56.989699001 +0000 UTC m=+0.132806770 container attach 061d5b69ceaf243af42e5e35d58d75cf3f87e5f81316da7816aa4cd0b137159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 05:10:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v862: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.2 KiB/s wr, 74 op/s
Dec  7 05:10:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:10:57.141Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:10:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:10:57.142Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:10:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:10:57.142Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:10:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:57 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564001f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:10:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:10:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:57.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:57 np0005549474 lvm[269294]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:10:57 np0005549474 lvm[269294]: VG ceph_vg0 finished
Dec  7 05:10:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:10:57 np0005549474 exciting_wu[269218]: {}
Dec  7 05:10:57 np0005549474 podman[269202]: 2025-12-07 10:10:57.733964749 +0000 UTC m=+0.877072488 container died 061d5b69ceaf243af42e5e35d58d75cf3f87e5f81316da7816aa4cd0b137159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:10:57 np0005549474 systemd[1]: libpod-061d5b69ceaf243af42e5e35d58d75cf3f87e5f81316da7816aa4cd0b137159b.scope: Deactivated successfully.
Dec  7 05:10:57 np0005549474 systemd[1]: libpod-061d5b69ceaf243af42e5e35d58d75cf3f87e5f81316da7816aa4cd0b137159b.scope: Consumed 1.262s CPU time.
Dec  7 05:10:57 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b75845da82d2e257f64b039731c4f3e3d9df238b4a48d6896ff13a129495a3c9-merged.mount: Deactivated successfully.
Dec  7 05:10:57 np0005549474 podman[269202]: 2025-12-07 10:10:57.788325941 +0000 UTC m=+0.931433700 container remove 061d5b69ceaf243af42e5e35d58d75cf3f87e5f81316da7816aa4cd0b137159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wu, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:10:57 np0005549474 systemd[1]: libpod-conmon-061d5b69ceaf243af42e5e35d58d75cf3f87e5f81316da7816aa4cd0b137159b.scope: Deactivated successfully.
Dec  7 05:10:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:10:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:10:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:58 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:58 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:10:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:58 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:58.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:58 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v863: 337 pgs: 337 active+clean; 191 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 116 op/s
Dec  7 05:10:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:10:59 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:10:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:10:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:10:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:59.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:10:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:59] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Dec  7 05:10:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:10:59] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Dec  7 05:11:00 np0005549474 nova_compute[256753]: 2025-12-07 10:11:00.019 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:00 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640020f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:00.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:00 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v864: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 593 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec  7 05:11:01 np0005549474 podman[269338]: 2025-12-07 10:11:01.254612906 +0000 UTC m=+0.068085896 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  7 05:11:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:01 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:01.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:01 np0005549474 nova_compute[256753]: 2025-12-07 10:11:01.559 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:02 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:02.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:02 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  7 05:11:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3243843515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  7 05:11:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  7 05:11:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3243843515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  7 05:11:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v865: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  7 05:11:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:03.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:04 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:04.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:04 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:05 np0005549474 nova_compute[256753]: 2025-12-07 10:11:05.062 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v866: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  7 05:11:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:05 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:05.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:06 np0005549474 nova_compute[256753]: 2025-12-07 10:11:06.602 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:06 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:06.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:06 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v867: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  7 05:11:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:11:07.144Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:11:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:11:07.144Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:11:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:07 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:07.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:08 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:08.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:08 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v868: 337 pgs: 337 active+clean; 139 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Dec  7 05:11:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:09 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:09.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:09] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Dec  7 05:11:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:09] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Dec  7 05:11:10 np0005549474 nova_compute[256753]: 2025-12-07 10:11:10.096 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:10 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:10.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:10 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.085 256757 DEBUG oslo_concurrency.lockutils [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "f9232f75-55c6-4982-8757-b2f3408b0ca4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.086 256757 DEBUG oslo_concurrency.lockutils [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.086 256757 DEBUG oslo_concurrency.lockutils [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.087 256757 DEBUG oslo_concurrency.lockutils [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.087 256757 DEBUG oslo_concurrency.lockutils [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.089 256757 INFO nova.compute.manager [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Terminating instance#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.091 256757 DEBUG nova.compute.manager [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  7 05:11:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v869: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 473 KiB/s wr, 49 op/s
Dec  7 05:11:11 np0005549474 kernel: tapd7188451-df (unregistering): left promiscuous mode
Dec  7 05:11:11 np0005549474 NetworkManager[49051]: <info>  [1765102271.1663] device (tapd7188451-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  7 05:11:11 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:11Z|00046|binding|INFO|Releasing lport d7188451-df6a-4332-8055-1f51cc58facf from this chassis (sb_readonly=0)
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.174 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:11 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:11Z|00047|binding|INFO|Setting lport d7188451-df6a-4332-8055-1f51cc58facf down in Southbound
Dec  7 05:11:11 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:11Z|00048|binding|INFO|Removing iface tapd7188451-df ovn-installed in OVS
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.181 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:bd:b8 10.100.0.7'], port_security=['fa:16:3e:f5:bd:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f9232f75-55c6-4982-8757-b2f3408b0ca4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e688201f-cd34-4e2e-8b69-c5b50ad0046c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '47f93f67-6ce6-4959-9f55-c050bd0e7857', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0da5444-1ae1-4fbc-98ae-e56ff57f59da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=d7188451-df6a-4332-8055-1f51cc58facf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.183 164143 INFO neutron.agent.ovn.metadata.agent [-] Port d7188451-df6a-4332-8055-1f51cc58facf in datapath e688201f-cd34-4e2e-8b69-c5b50ad0046c unbound from our chassis#033[00m
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.184 164143 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e688201f-cd34-4e2e-8b69-c5b50ad0046c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.186 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[528533bd-73e4-48f3-b6f3-b6327910eded]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.186 164143 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c namespace which is not needed anymore#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.211 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:11 np0005549474 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec  7 05:11:11 np0005549474 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Consumed 15.380s CPU time.
Dec  7 05:11:11 np0005549474 systemd-machined[217882]: Machine qemu-2-instance-00000004 terminated.
Dec  7 05:11:11 np0005549474 NetworkManager[49051]: <info>  [1765102271.3151] manager: (tapd7188451-df): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.317 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.324 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.336 256757 INFO nova.virt.libvirt.driver [-] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Instance destroyed successfully.#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.337 256757 DEBUG nova.objects.instance [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'resources' on Instance uuid f9232f75-55c6-4982-8757-b2f3408b0ca4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.353 256757 DEBUG nova.virt.libvirt.vif [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:09:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1192553867',display_name='tempest-TestNetworkBasicOps-server-1192553867',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1192553867',id=4,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJPbS9aTbpy0X69C6m9JxdIrMBThePaZ9vqkS8QNE9/nY+zf5HOp8p3l9Geo7CIg7rz/Daes3m6cu2P4mTFia9frX4nXNnutbFgH8nFazNzjNquy/TGVPPZ31oy0Xas0rw==',key_name='tempest-TestNetworkBasicOps-707292887',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:10:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-q05txiox',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:10:08Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=f9232f75-55c6-4982-8757-b2f3408b0ca4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.355 256757 DEBUG nova.network.os_vif_util [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "d7188451-df6a-4332-8055-1f51cc58facf", "address": "fa:16:3e:f5:bd:b8", "network": {"id": "e688201f-cd34-4e2e-8b69-c5b50ad0046c", "bridge": "br-int", "label": "tempest-network-smoke--157315707", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7188451-df", "ovs_interfaceid": "d7188451-df6a-4332-8055-1f51cc58facf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.356 256757 DEBUG nova.network.os_vif_util [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:bd:b8,bridge_name='br-int',has_traffic_filtering=True,id=d7188451-df6a-4332-8055-1f51cc58facf,network=Network(e688201f-cd34-4e2e-8b69-c5b50ad0046c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7188451-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.357 256757 DEBUG os_vif [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:bd:b8,bridge_name='br-int',has_traffic_filtering=True,id=d7188451-df6a-4332-8055-1f51cc58facf,network=Network(e688201f-cd34-4e2e-8b69-c5b50ad0046c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7188451-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.360 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.361 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7188451-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:11:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:11 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.367 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:11 np0005549474 neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c[266779]: [NOTICE]   (266785) : haproxy version is 2.8.14-c23fe91
Dec  7 05:11:11 np0005549474 neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c[266779]: [NOTICE]   (266785) : path to executable is /usr/sbin/haproxy
Dec  7 05:11:11 np0005549474 neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c[266779]: [WARNING]  (266785) : Exiting Master process...
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.370 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:11 np0005549474 neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c[266779]: [ALERT]    (266785) : Current worker (266787) exited with code 143 (Terminated)
Dec  7 05:11:11 np0005549474 neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c[266779]: [WARNING]  (266785) : All workers exited. Exiting... (0)
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.373 256757 INFO os_vif [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:bd:b8,bridge_name='br-int',has_traffic_filtering=True,id=d7188451-df6a-4332-8055-1f51cc58facf,network=Network(e688201f-cd34-4e2e-8b69-c5b50ad0046c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7188451-df')#033[00m
Dec  7 05:11:11 np0005549474 systemd[1]: libpod-60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342.scope: Deactivated successfully.
Dec  7 05:11:11 np0005549474 podman[269422]: 2025-12-07 10:11:11.380362371 +0000 UTC m=+0.055033741 container died 60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.397 256757 DEBUG nova.compute.manager [req-94fbc76f-653e-476a-a833-2cf46facd451 req-3822a303-6b83-4b6b-b81c-4ff2917141d4 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Received event network-vif-unplugged-d7188451-df6a-4332-8055-1f51cc58facf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.397 256757 DEBUG oslo_concurrency.lockutils [req-94fbc76f-653e-476a-a833-2cf46facd451 req-3822a303-6b83-4b6b-b81c-4ff2917141d4 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.398 256757 DEBUG oslo_concurrency.lockutils [req-94fbc76f-653e-476a-a833-2cf46facd451 req-3822a303-6b83-4b6b-b81c-4ff2917141d4 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.398 256757 DEBUG oslo_concurrency.lockutils [req-94fbc76f-653e-476a-a833-2cf46facd451 req-3822a303-6b83-4b6b-b81c-4ff2917141d4 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.398 256757 DEBUG nova.compute.manager [req-94fbc76f-653e-476a-a833-2cf46facd451 req-3822a303-6b83-4b6b-b81c-4ff2917141d4 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] No waiting events found dispatching network-vif-unplugged-d7188451-df6a-4332-8055-1f51cc58facf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.399 256757 DEBUG nova.compute.manager [req-94fbc76f-653e-476a-a833-2cf46facd451 req-3822a303-6b83-4b6b-b81c-4ff2917141d4 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Received event network-vif-unplugged-d7188451-df6a-4332-8055-1f51cc58facf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  7 05:11:11 np0005549474 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342-userdata-shm.mount: Deactivated successfully.
Dec  7 05:11:11 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e1d10d48703cb8cf9180a4f20d1306e7ce7c1b5c95aa05e07248cd2c5d851dbd-merged.mount: Deactivated successfully.
Dec  7 05:11:11 np0005549474 podman[269422]: 2025-12-07 10:11:11.416351611 +0000 UTC m=+0.091022951 container cleanup 60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  7 05:11:11 np0005549474 systemd[1]: libpod-conmon-60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342.scope: Deactivated successfully.
Dec  7 05:11:11 np0005549474 podman[269474]: 2025-12-07 10:11:11.491301353 +0000 UTC m=+0.045268365 container remove 60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.498 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[badab605-ac3d-429e-ae36-94521da90b31]: (4, ('Sun Dec  7 10:11:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c (60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342)\n60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342\nSun Dec  7 10:11:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c (60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342)\n60e816b28c97ea140d4e724b43b43bb9f8199c6136f3ba3d7f7f3f06a3bed342\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.499 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[c9550ee8-ef7b-422f-9b56-ba800d83f22b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.500 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape688201f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:11:11 np0005549474 kernel: tape688201f-c0: left promiscuous mode
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.502 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:11.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.516 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.520 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[b4066060-9b50-439b-bd7c-3e335b5b78a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.539 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[463fd2c0-6af1-4000-9720-b0bb13dae1ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.540 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc84411-43f2-4db2-abbd-3f755c9d12b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.558 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[e087868a-f7e8-4c83-bfb6-7f538a1a0e36]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417058, 'reachable_time': 44195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269490, 'error': None, 'target': 'ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:11 np0005549474 systemd[1]: run-netns-ovnmeta\x2de688201f\x2dcd34\x2d4e2e\x2d8b69\x2dc5b50ad0046c.mount: Deactivated successfully.
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.563 164283 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e688201f-cd34-4e2e-8b69-c5b50ad0046c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  7 05:11:11 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:11.563 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ed12a4-000a-4b15-8b9e-a943dbeb494d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.794 256757 INFO nova.virt.libvirt.driver [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Deleting instance files /var/lib/nova/instances/f9232f75-55c6-4982-8757-b2f3408b0ca4_del#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.795 256757 INFO nova.virt.libvirt.driver [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Deletion of /var/lib/nova/instances/f9232f75-55c6-4982-8757-b2f3408b0ca4_del complete#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.895 256757 INFO nova.compute.manager [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.895 256757 DEBUG oslo.service.loopingcall [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.896 256757 DEBUG nova.compute.manager [-] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  7 05:11:11 np0005549474 nova_compute[256753]: 2025-12-07 10:11:11.896 256757 DEBUG nova.network.neutron [-] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  7 05:11:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:11:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:11:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:11:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:11:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:11:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:11:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:11:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:11:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:12.574 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:11:12 np0005549474 nova_compute[256753]: 2025-12-07 10:11:12.575 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:12.576 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  7 05:11:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:12 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:12.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:12 np0005549474 nova_compute[256753]: 2025-12-07 10:11:12.831 256757 DEBUG nova.network.neutron [-] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:11:12 np0005549474 nova_compute[256753]: 2025-12-07 10:11:12.859 256757 INFO nova.compute.manager [-] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Took 0.96 seconds to deallocate network for instance.#033[00m
Dec  7 05:11:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:12 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580004500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:12 np0005549474 nova_compute[256753]: 2025-12-07 10:11:12.902 256757 DEBUG nova.compute.manager [req-79de2114-28b4-4661-8526-7f1884443fff req-456cfe37-32cc-4249-9ee1-39742ddc71ff ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Received event network-vif-deleted-d7188451-df6a-4332-8055-1f51cc58facf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:11:12 np0005549474 nova_compute[256753]: 2025-12-07 10:11:12.931 256757 DEBUG oslo_concurrency.lockutils [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:11:12 np0005549474 nova_compute[256753]: 2025-12-07 10:11:12.932 256757 DEBUG oslo_concurrency.lockutils [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:11:12 np0005549474 nova_compute[256753]: 2025-12-07 10:11:12.987 256757 DEBUG oslo_concurrency.processutils [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:11:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v870: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 22 KiB/s wr, 29 op/s
Dec  7 05:11:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:13 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:11:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3328962706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.454 256757 DEBUG oslo_concurrency.processutils [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.462 256757 DEBUG nova.compute.provider_tree [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.484 256757 DEBUG nova.compute.manager [req-ac0d9af3-9cb3-47bd-b013-f2a1ed6397fa req-5ec67e2b-014e-4768-87ca-17db78e8484e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Received event network-vif-plugged-d7188451-df6a-4332-8055-1f51cc58facf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.485 256757 DEBUG oslo_concurrency.lockutils [req-ac0d9af3-9cb3-47bd-b013-f2a1ed6397fa req-5ec67e2b-014e-4768-87ca-17db78e8484e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.485 256757 DEBUG oslo_concurrency.lockutils [req-ac0d9af3-9cb3-47bd-b013-f2a1ed6397fa req-5ec67e2b-014e-4768-87ca-17db78e8484e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.486 256757 DEBUG oslo_concurrency.lockutils [req-ac0d9af3-9cb3-47bd-b013-f2a1ed6397fa req-5ec67e2b-014e-4768-87ca-17db78e8484e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.486 256757 DEBUG nova.compute.manager [req-ac0d9af3-9cb3-47bd-b013-f2a1ed6397fa req-5ec67e2b-014e-4768-87ca-17db78e8484e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] No waiting events found dispatching network-vif-plugged-d7188451-df6a-4332-8055-1f51cc58facf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.487 256757 WARNING nova.compute.manager [req-ac0d9af3-9cb3-47bd-b013-f2a1ed6397fa req-5ec67e2b-014e-4768-87ca-17db78e8484e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Received unexpected event network-vif-plugged-d7188451-df6a-4332-8055-1f51cc58facf for instance with vm_state deleted and task_state None.
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.489 256757 DEBUG nova.scheduler.client.report [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
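For readers decoding the inventory dump above: Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio. A small sketch with the exact numbers from the log line; the formula is standard Placement behaviour, the script itself is only illustrative:

    # Illustrative: turn the logged inventory record into effective capacity.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, "schedulable:", cap)
    # MEMORY_MB: 7168, VCPU: 32, DISK_GB: 52 (52.2 truncated)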
Dec  7 05:11:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 05:11:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:13.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
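The beast access lines repeat throughout this capture. A best-effort parser for them, with the field layout inferred from the samples in this log rather than from radosgw documentation:

    import re

    # Field layout inferred from the beast lines in this log (not from docs).
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    sample = ('beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous '
              '[07/Dec/2025:10:11:13.512 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.002000054s')
    m = BEAST.match(sample)
    if m:
        print(m["client"], m["request"], m["status"], m["latency"])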
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.522 256757 DEBUG oslo_concurrency.lockutils [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.561 256757 INFO nova.scheduler.client.report [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Deleted allocations for instance f9232f75-55c6-4982-8757-b2f3408b0ca4
Dec  7 05:11:13 np0005549474 nova_compute[256753]: 2025-12-07 10:11:13.667 256757 DEBUG oslo_concurrency.lockutils [None req-d68217d9-2375-48d3-affc-51ea8d1b0ee0 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "f9232f75-55c6-4982-8757-b2f3408b0ca4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:11:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:14 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:14.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:14 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Dec  7 05:11:15 np0005549474 nova_compute[256753]: 2025-12-07 10:11:15.100 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:15.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:16 np0005549474 nova_compute[256753]: 2025-12-07 10:11:16.368 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:16 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:16 np0005549474 nova_compute[256753]: 2025-12-07 10:11:16.698 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:16 np0005549474 nova_compute[256753]: 2025-12-07 10:11:16.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:11:16 np0005549474 nova_compute[256753]: 2025-12-07 10:11:16.763 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:16.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:16 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 56 op/s
Dec  7 05:11:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:11:17.145Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:11:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:11:17.146Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
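The two alertmanager lines above show every ceph-dashboard webhook receiver timing out. A hypothetical reachability probe for one of the logged receiver URLs; the URL is copied from the log, while the POST body and 5-second timeout are arbitrary probe values, not what alertmanager actually sends:

    import urllib.error
    import urllib.request

    # URL copied from the alertmanager error above; payload/timeout are
    # arbitrary probe values chosen for this sketch.
    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"{}", method="POST")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable, HTTP", resp.status)
    except (urllib.error.URLError, OSError) as exc:
        print("unreachable:", exc)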
Dec  7 05:11:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:17 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:17.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:17 np0005549474 nova_compute[256753]: 2025-12-07 10:11:17.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:11:17 np0005549474 nova_compute[256753]: 2025-12-07 10:11:17.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:11:17 np0005549474 nova_compute[256753]: 2025-12-07 10:11:17.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:11:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:18 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:18.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:18 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 56 op/s
Dec  7 05:11:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:19 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:19.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:19 np0005549474 nova_compute[256753]: 2025-12-07 10:11:19.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:11:19 np0005549474 nova_compute[256753]: 2025-12-07 10:11:19.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:11:19 np0005549474 nova_compute[256753]: 2025-12-07 10:11:19.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:11:19 np0005549474 nova_compute[256753]: 2025-12-07 10:11:19.910 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:11:19 np0005549474 nova_compute[256753]: 2025-12-07 10:11:19.910 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:11:19 np0005549474 nova_compute[256753]: 2025-12-07 10:11:19.911 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:11:19 np0005549474 nova_compute[256753]: 2025-12-07 10:11:19.911 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:11:19 np0005549474 nova_compute[256753]: 2025-12-07 10:11:19.912 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:11:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:19] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:11:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:19] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:11:20 np0005549474 nova_compute[256753]: 2025-12-07 10:11:20.102 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:11:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/73554600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:11:20 np0005549474 nova_compute[256753]: 2025-12-07 10:11:20.398 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:11:20 np0005549474 nova_compute[256753]: 2025-12-07 10:11:20.545 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:11:20 np0005549474 nova_compute[256753]: 2025-12-07 10:11:20.546 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4573MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:11:20 np0005549474 nova_compute[256753]: 2025-12-07 10:11:20.546 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:11:20 np0005549474 nova_compute[256753]: 2025-12-07 10:11:20.547 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:11:20 np0005549474 nova_compute[256753]: 2025-12-07 10:11:20.601 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:11:20 np0005549474 nova_compute[256753]: 2025-12-07 10:11:20.601 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:11:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:20 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:20 np0005549474 nova_compute[256753]: 2025-12-07 10:11:20.719 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:11:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:20.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:20 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  7 05:11:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:11:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576171414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:11:21 np0005549474 nova_compute[256753]: 2025-12-07 10:11:21.190 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:11:21 np0005549474 nova_compute[256753]: 2025-12-07 10:11:21.199 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:11:21 np0005549474 nova_compute[256753]: 2025-12-07 10:11:21.352 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:11:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:21 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:21 np0005549474 nova_compute[256753]: 2025-12-07 10:11:21.372 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:21.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:21.578 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  7 05:11:21 np0005549474 nova_compute[256753]: 2025-12-07 10:11:21.644 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:11:21 np0005549474 nova_compute[256753]: 2025-12-07 10:11:21.645 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:11:22 np0005549474 nova_compute[256753]: 2025-12-07 10:11:22.641 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:11:22 np0005549474 nova_compute[256753]: 2025-12-07 10:11:22.642 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:11:22 np0005549474 nova_compute[256753]: 2025-12-07 10:11:22.642 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:11:22 np0005549474 nova_compute[256753]: 2025-12-07 10:11:22.642 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:11:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:22 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:22 np0005549474 nova_compute[256753]: 2025-12-07 10:11:22.731 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:11:22 np0005549474 nova_compute[256753]: 2025-12-07 10:11:22.732 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:11:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:22.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:22 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v875: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  7 05:11:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:23 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:23.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:24 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:24.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:24 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v876: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  7 05:11:25 np0005549474 nova_compute[256753]: 2025-12-07 10:11:25.149 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:25 np0005549474 podman[269601]: 2025-12-07 10:11:25.275668289 +0000 UTC m=+0.073707249 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec  7 05:11:25 np0005549474 podman[269602]: 2025-12-07 10:11:25.315150494 +0000 UTC m=+0.117668587 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
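The podman events above are the periodic container health checks reporting health_status=healthy. A sketch of spot-checking the same state by hand; the container names are taken from the events, and the inspect format string is the standard podman/docker health field:

    import subprocess

    # Query the current health state of the containers named in the events.
    for name in ("multipathd", "ovn_controller"):
        out = subprocess.run(
            ["podman", "inspect", "-f", "{{.State.Health.Status}}", name],
            capture_output=True, text=True,
        )
        print(name, (out.stdout or out.stderr).strip())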
Dec  7 05:11:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:25 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:25.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:26 np0005549474 nova_compute[256753]: 2025-12-07 10:11:26.330 256757 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765102271.328854, f9232f75-55c6-4982-8757-b2f3408b0ca4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  7 05:11:26 np0005549474 nova_compute[256753]: 2025-12-07 10:11:26.331 256757 INFO nova.compute.manager [-] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] VM Stopped (Lifecycle Event)
Dec  7 05:11:26 np0005549474 nova_compute[256753]: 2025-12-07 10:11:26.350 256757 DEBUG nova.compute.manager [None req-e4ca1231-06e8-4301-96e1-225688e6793d - - - - - -] [instance: f9232f75-55c6-4982-8757-b2f3408b0ca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:11:26 np0005549474 nova_compute[256753]: 2025-12-07 10:11:26.413 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:26 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:26.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:26 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v877: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:11:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:11:27.147Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:11:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:27 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:11:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:11:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:27.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:28 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:28.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:28 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v878: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:11:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:29 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:29.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:29] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  7 05:11:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:29] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  7 05:11:30 np0005549474 nova_compute[256753]: 2025-12-07 10:11:30.186 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:30 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:30.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:30 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v879: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:11:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:31 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:31 np0005549474 nova_compute[256753]: 2025-12-07 10:11:31.450 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:31.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:32 np0005549474 podman[269657]: 2025-12-07 10:11:32.278655167 +0000 UTC m=+0.086243291 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:11:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:32 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:32.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:32 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v880: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:11:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:33 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:33.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:34 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:34.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:34 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v881: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  7 05:11:35 np0005549474 nova_compute[256753]: 2025-12-07 10:11:35.219 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:35 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:35.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.139 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.140 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.158 256757 DEBUG nova.compute.manager [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.245 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.246 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.257 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.258 256757 INFO nova.compute.claims [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Claim successful on node compute-0.ctlplane.example.com
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.398 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.495 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:36 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:36.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:11:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325293765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.876 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.884 256757 DEBUG nova.compute.provider_tree [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:11:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:36 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.903 256757 DEBUG nova.scheduler.client.report [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.922 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.923 256757 DEBUG nova.compute.manager [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.968 256757 DEBUG nova.compute.manager [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.968 256757 DEBUG nova.network.neutron [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  7 05:11:36 np0005549474 nova_compute[256753]: 2025-12-07 10:11:36.987 256757 INFO nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.014 256757 DEBUG nova.compute.manager [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  7 05:11:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v882: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.139 256757 DEBUG nova.compute.manager [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.140 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.141 256757 INFO nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Creating image(s)#033[00m
Dec  7 05:11:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:11:37.147Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:11:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:11:37.148Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.180 256757 DEBUG nova.storage.rbd_utils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image daa0d61c-ce51-4a65-82e0-106c2654ed92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.226 256757 DEBUG nova.storage.rbd_utils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image daa0d61c-ce51-4a65-82e0-106c2654ed92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.265 256757 DEBUG nova.storage.rbd_utils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image daa0d61c-ce51-4a65-82e0-106c2654ed92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.270 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.341 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
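The qemu-img probe above is wrapped in oslo_concurrency.prlimit, which caps address space (1 GiB) and CPU time (30 s) so that introspecting an untrusted image cannot exhaust the host. Reproducing the logged command and reading the JSON result; the output field names are standard `qemu-img info --output=json`:

    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b'
    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info', base, '--force-share', '--output=json']
    info = json.loads(subprocess.check_output(cmd))
    print(info['format'], info['virtual-size'])  # e.g. 'qcow2' and size in bytes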
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.342 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.343 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.344 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.373 256757 DEBUG nova.storage.rbd_utils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image daa0d61c-ce51-4a65-82e0-106c2654ed92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.376 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b daa0d61c-ce51-4a65-82e0-106c2654ed92_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:11:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:37 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.400 256757 DEBUG nova.policy [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f27cf20bf8c4429aa12589418a57e41', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ad61a97ffab4252be3eafb028b560c1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  7 05:11:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.003000081s ======
Dec  7 05:11:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:37.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.668 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b daa0d61c-ce51-4a65-82e0-106c2654ed92_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:11:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.741 256757 DEBUG nova.storage.rbd_utils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] resizing rbd image daa0d61c-ce51-4a65-82e0-106c2654ed92_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
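The import at 10:11:37.376-37.668 and the resize at 10:11:37.741 together realize the flavor's 1 GiB root disk: the cached base image is imported into the vms pool, then grown to 1073741824 bytes. A sketch of both steps as CLI calls; note Nova performs the resize through the librbd Python bindings, so the second command is only the CLI equivalent of what rbd_utils.resize() does:

    import subprocess

    base = '/var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b'
    disk = 'daa0d61c-ce51-4a65-82e0-106c2654ed92_disk'
    ceph_args = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Step 1: import the flat base image as a format-2 RBD image (as logged).
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, disk,
                           '--image-format=2'] + ceph_args)
    # Step 2: grow it to the flavor's root_gb; 1073741824 bytes == 1G.
    subprocess.check_call(['rbd', 'resize', '--size', '1G',
                           'vms/' + disk] + ceph_args)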
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.861 256757 DEBUG nova.objects.instance [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'migration_context' on Instance uuid daa0d61c-ce51-4a65-82e0-106c2654ed92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.878 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.878 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Ensure instance console log exists: /var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.879 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.879 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:11:37 np0005549474 nova_compute[256753]: 2025-12-07 10:11:37.880 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:11:38 np0005549474 nova_compute[256753]: 2025-12-07 10:11:38.464 256757 DEBUG nova.network.neutron [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Successfully created port: 4109af21-a3da-49b5-8481-432b45bf7ea9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  7 05:11:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:38.622 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:11:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:38.622 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:11:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:38.623 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:11:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:38 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:38.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:38 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v883: 337 pgs: 337 active+clean; 76 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Dec  7 05:11:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:39 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:39.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:39 np0005549474 nova_compute[256753]: 2025-12-07 10:11:39.626 256757 DEBUG nova.network.neutron [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Successfully updated port: 4109af21-a3da-49b5-8481-432b45bf7ea9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  7 05:11:39 np0005549474 nova_compute[256753]: 2025-12-07 10:11:39.644 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:11:39 np0005549474 nova_compute[256753]: 2025-12-07 10:11:39.645 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquired lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:11:39 np0005549474 nova_compute[256753]: 2025-12-07 10:11:39.645 256757 DEBUG nova.network.neutron [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  7 05:11:39 np0005549474 nova_compute[256753]: 2025-12-07 10:11:39.773 256757 DEBUG nova.compute.manager [req-4e5a6784-652a-4d7d-a362-5880e9eb2a74 req-b28bf171-1380-45e8-9b2e-63d8460ff28e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-changed-4109af21-a3da-49b5-8481-432b45bf7ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:11:39 np0005549474 nova_compute[256753]: 2025-12-07 10:11:39.773 256757 DEBUG nova.compute.manager [req-4e5a6784-652a-4d7d-a362-5880e9eb2a74 req-b28bf171-1380-45e8-9b2e-63d8460ff28e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Refreshing instance network info cache due to event network-changed-4109af21-a3da-49b5-8481-432b45bf7ea9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:11:39 np0005549474 nova_compute[256753]: 2025-12-07 10:11:39.774 256757 DEBUG oslo_concurrency.lockutils [req-4e5a6784-652a-4d7d-a362-5880e9eb2a74 req-b28bf171-1380-45e8-9b2e-63d8460ff28e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:11:39 np0005549474 nova_compute[256753]: 2025-12-07 10:11:39.831 256757 DEBUG nova.network.neutron [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  7 05:11:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:39] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  7 05:11:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:39] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.222 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:40 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.812 256757 DEBUG nova.network.neutron [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updating instance_info_cache with network_info: [{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.831 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Releasing lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.831 256757 DEBUG nova.compute.manager [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Instance network_info: |[{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
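The network_info payload logged twice above is plain JSON once the surrounding |[ ]| markers are stripped, and the fields consumed later in the spawn (fixed IP, MTU, OVS interface id) can be read straight out of it. Illustrative parsing only, with the structure exactly as logged:

    import json

    vifs = json.loads(blob)  # blob = the [{...}] payload logged above
    vif = vifs[0]
    fixed_ip = vif['network']['subnets'][0]['ips'][0]['address']  # '10.100.0.13'
    mtu = vif['network']['meta']['mtu']                           # 1442
    ovs_port = vif['ovs_interfaceid']  # '4109af21-a3da-49b5-8481-432b45bf7ea9'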
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.833 256757 DEBUG oslo_concurrency.lockutils [req-4e5a6784-652a-4d7d-a362-5880e9eb2a74 req-b28bf171-1380-45e8-9b2e-63d8460ff28e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.833 256757 DEBUG nova.network.neutron [req-4e5a6784-652a-4d7d-a362-5880e9eb2a74 req-b28bf171-1380-45e8-9b2e-63d8460ff28e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Refreshing network info cache for port 4109af21-a3da-49b5-8481-432b45bf7ea9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.839 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Start _get_guest_xml network_info=[{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'image_id': 'af7b5730-2fa9-449f-8ccb-a9519582f1b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.846 256757 WARNING nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:11:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:11:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:40.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.857 256757 DEBUG nova.virt.libvirt.host [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.858 256757 DEBUG nova.virt.libvirt.host [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.863 256757 DEBUG nova.virt.libvirt.host [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.864 256757 DEBUG nova.virt.libvirt.host [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
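The pair of probes above is Nova deciding where the cpu controller lives: the cgroups-v1 check fails ("CPU controller missing") and the v2 check succeeds ("CPU controller found"), so the host is on the unified hierarchy. A stand-in for the v2 probe, assuming the standard unified-hierarchy layout; this is not Nova's exact helper from host.py:

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        # On a unified-hierarchy host this file lists available controllers,
        # e.g. "cpuset cpu io memory hugetlb pids rdma misc".
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted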
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.864 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.865 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-07T10:06:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bc1a767b-c985-4370-b41e-5cb294d603d7',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.866 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.866 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.866 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.867 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.868 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.868 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.869 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.869 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.870 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.870 256757 DEBUG nova.virt.hardware [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
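The hardware.py lines above boil down to one computation: with no constraints from flavor or image (limits and preferences all 0:0:0, caps of 65536 each), Nova enumerates the (sockets, cores, threads) factorizations of the vCPU count, and for 1 vCPU the only factorization is 1:1:1, which is exactly the <topology> that lands in the guest XML below. A toy version of that enumeration, not Nova's implementation:

    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        # Yield every (sockets, cores, threads) triple whose product is vcpus.
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log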
Dec  7 05:11:40 np0005549474 nova_compute[256753]: 2025-12-07 10:11:40.875 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:11:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:40 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v884: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:11:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 05:11:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905991167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.374 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:11:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:41 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640043b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.416 256757 DEBUG nova.storage.rbd_utils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image daa0d61c-ce51-4a65-82e0-106c2654ed92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.422 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.499 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:41.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 05:11:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548017098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.848 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
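The two `ceph mon dump` round-trips above (10:11:40.875 and 10:11:41.422, apparently one per RBD disk) supply the monitor addresses that appear as <host> elements in the libvirt XML below. Extracting them from the JSON might look like the following; the key layout is the usual `ceph mon dump --format=json` shape, but exact fields vary by Ceph release:

    import json
    import subprocess

    dump = json.loads(subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']))
    # Each mon entry's "addr" is usually "ip:port/nonce"; drop the nonce.
    hosts = [m['addr'].split('/')[0] for m in dump['mons']]
    # e.g. ['192.168.122.100:6789', '192.168.122.102:6789', '192.168.122.101:6789']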
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.851 256757 DEBUG nova.virt.libvirt.vif [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1390661383',display_name='tempest-TestNetworkBasicOps-server-1390661383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1390661383',id=6,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+/vPI4e2NtH6h2oCv9Fj3XWNq8zUo66YWV86MfpANVywGXA0slkI6U0K669sWiaSD+5dsMO7JVa1SJLOuvVvWAjhWYnI3Sk4xqn8SB4wmPdrCzHMVsE1qT7PZGkKUz3w==',key_name='tempest-TestNetworkBasicOps-1190719752',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-rn85oa33',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:11:37Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=daa0d61c-ce51-4a65-82e0-106c2654ed92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.851 256757 DEBUG nova.network.os_vif_util [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.853 256757 DEBUG nova.network.os_vif_util [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:76,bridge_name='br-int',has_traffic_filtering=True,id=4109af21-a3da-49b5-8481-432b45bf7ea9,network=Network(c8e92cf5-e64a-4378-8f87-c574612f73da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4109af21-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.855 256757 DEBUG nova.objects.instance [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid daa0d61c-ce51-4a65-82e0-106c2654ed92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.871 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] End _get_guest_xml xml=<domain type="kvm">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  <uuid>daa0d61c-ce51-4a65-82e0-106c2654ed92</uuid>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  <name>instance-00000006</name>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  <memory>131072</memory>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  <vcpu>1</vcpu>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  <metadata>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <nova:name>tempest-TestNetworkBasicOps-server-1390661383</nova:name>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <nova:creationTime>2025-12-07 10:11:40</nova:creationTime>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <nova:flavor name="m1.nano">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <nova:memory>128</nova:memory>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <nova:disk>1</nova:disk>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <nova:swap>0</nova:swap>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <nova:vcpus>1</nova:vcpus>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      </nova:flavor>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <nova:owner>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      </nova:owner>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <nova:ports>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <nova:port uuid="4109af21-a3da-49b5-8481-432b45bf7ea9">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        </nova:port>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      </nova:ports>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    </nova:instance>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  </metadata>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  <sysinfo type="smbios">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <system>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <entry name="manufacturer">RDO</entry>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <entry name="product">OpenStack Compute</entry>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <entry name="serial">daa0d61c-ce51-4a65-82e0-106c2654ed92</entry>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <entry name="uuid">daa0d61c-ce51-4a65-82e0-106c2654ed92</entry>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <entry name="family">Virtual Machine</entry>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    </system>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  </sysinfo>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  <os>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <boot dev="hd"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <smbios mode="sysinfo"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  <features>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <acpi/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <apic/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <vmcoreinfo/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  </features>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  <clock offset="utc">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <timer name="pit" tickpolicy="delay"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <timer name="hpet" present="no"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  </clock>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  <cpu mode="host-model" match="exact">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <topology sockets="1" cores="1" threads="1"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  </cpu>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  <devices>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <disk type="network" device="disk">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/daa0d61c-ce51-4a65-82e0-106c2654ed92_disk">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <target dev="vda" bus="virtio"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <disk type="network" device="cdrom">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/daa0d61c-ce51-4a65-82e0-106c2654ed92_disk.config">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <target dev="sda" bus="sata"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <interface type="ethernet">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <mac address="fa:16:3e:8c:0c:76"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <driver name="vhost" rx_queue_size="512"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <mtu size="1442"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <target dev="tap4109af21-a3"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <serial type="pty">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <log file="/var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/console.log" append="off"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    </serial>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <video>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    </video>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <input type="tablet" bus="usb"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <rng model="virtio">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <backend model="random">/dev/urandom</backend>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    </rng>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <controller type="usb" index="0"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    <memballoon model="virtio">
Dec  7 05:11:41 np0005549474 nova_compute[256753]:      <stats period="10"/>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:    </memballoon>
Dec  7 05:11:41 np0005549474 nova_compute[256753]:  </devices>
Dec  7 05:11:41 np0005549474 nova_compute[256753]: </domain>
Dec  7 05:11:41 np0005549474 nova_compute[256753]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
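
The block above is the complete libvirt domain XML nova's driver generated for instance daa0d61c-ce51-4a65-82e0-106c2654ed92: a q35/hvm guest with host-model CPU, two Ceph RBD-backed disks (root on vda/virtio, config drive on sda/sata, each listing the three monitors and the cephx secret), a vhost virtio interface with MTU 1442, and a pty serial console logged to console.log. A quick offline way to sanity-check such a definition is to parse it with the standard library; a minimal sketch (XML abbreviated from the dump above, element paths per the libvirt schema):

    # Minimal sketch: pull the RBD disk sources and the tap device out of a
    # libvirt domain XML like the one nova logged above (snippet abbreviated).
    import xml.etree.ElementTree as ET

    DOMAIN_XML = """
    <domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/daa0d61c-ce51-4a65-82e0-106c2654ed92_disk">
            <host name="192.168.122.100" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
        <interface type="ethernet">
          <mac address="fa:16:3e:8c:0c:76"/>
          <target dev="tap4109af21-a3"/>
        </interface>
      </devices>
    </domain>
    """

    root = ET.fromstring(DOMAIN_XML)
    for disk in root.findall("./devices/disk"):
        source, target = disk.find("source"), disk.find("target")
        print(target.get("dev"), "<-", source.get("protocol"), source.get("name"))
    for iface in root.findall("./devices/interface"):
        print("vif", iface.find("mac").get("address"),
              "on", iface.find("target").get("dev"))
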
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.873 256757 DEBUG nova.compute.manager [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Preparing to wait for external event network-vif-plugged-4109af21-a3da-49b5-8481-432b45bf7ea9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.874 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.874 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.874 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
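
The three lockutils lines above are oslo.concurrency's named-lock pattern: nova takes the per-instance "<uuid>-events" lock just long enough to register a waiter for the network-vif-plugged event it expects Neutron to send. A minimal sketch of the same pattern, assuming only oslo.concurrency (names copied from the log):

    # Sketch of the named-lock pattern in the lockutils lines above: a short
    # critical section guarding the shared per-instance event registry.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = "daa0d61c-ce51-4a65-82e0-106c2654ed92"  # from the log
    events = {}  # event name -> waiter object

    with lockutils.lock(INSTANCE_UUID + "-events"):
        # create-or-get, as in InstanceEvents.prepare_for_instance_event
        waiter = events.setdefault(
            "network-vif-plugged-4109af21-a3da-49b5-8481-432b45bf7ea9", object())
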
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.875 256757 DEBUG nova.virt.libvirt.vif [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1390661383',display_name='tempest-TestNetworkBasicOps-server-1390661383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1390661383',id=6,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+/vPI4e2NtH6h2oCv9Fj3XWNq8zUo66YWV86MfpANVywGXA0slkI6U0K669sWiaSD+5dsMO7JVa1SJLOuvVvWAjhWYnI3Sk4xqn8SB4wmPdrCzHMVsE1qT7PZGkKUz3w==',key_name='tempest-TestNetworkBasicOps-1190719752',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-rn85oa33',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:11:37Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=daa0d61c-ce51-4a65-82e0-106c2654ed92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.875 256757 DEBUG nova.network.os_vif_util [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.876 256757 DEBUG nova.network.os_vif_util [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:76,bridge_name='br-int',has_traffic_filtering=True,id=4109af21-a3da-49b5-8481-432b45bf7ea9,network=Network(c8e92cf5-e64a-4378-8f87-c574612f73da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4109af21-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.876 256757 DEBUG os_vif [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:76,bridge_name='br-int',has_traffic_filtering=True,id=4109af21-a3da-49b5-8481-432b45bf7ea9,network=Network(c8e92cf5-e64a-4378-8f87-c574612f73da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4109af21-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.876 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.877 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.877 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.879 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.879 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4109af21-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.880 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4109af21-a3, col_values=(('external_ids', {'iface-id': '4109af21-a3da-49b5-8481-432b45bf7ea9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:0c:76', 'vm-uuid': 'daa0d61c-ce51-4a65-82e0-106c2654ed92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
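
The transactions above are ovsdbapp talking to the local OVSDB: an idempotent add of br-int (which caused no change), then adding the tap port to br-int and stamping its Interface row with the external_ids that let ovn-controller match it to the logical port. A hedged sketch of the same calls via ovsdbapp's Open_vSwitch schema API (socket path assumed, names copied from the log):

    # Sketch: the logged AddPortCommand + DbSetCommand as ovsdbapp calls.
    # Assumes the default local OVSDB socket path.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap4109af21-a3", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap4109af21-a3",
            ("external_ids", {
                "iface-id": "4109af21-a3da-49b5-8481-432b45bf7ea9",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:8c:0c:76",
                "vm-uuid": "daa0d61c-ce51-4a65-82e0-106c2654ed92",
            })))

ovn-controller watches for exactly this iface-id; the "Claiming lport" lines at 10:11:42 below are its reaction.
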
Dec  7 05:11:41 np0005549474 NetworkManager[49051]: <info>  [1765102301.8820] manager: (tap4109af21-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.883 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.889 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.890 256757 INFO os_vif [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0c:76,bridge_name='br-int',has_traffic_filtering=True,id=4109af21-a3da-49b5-8481-432b45bf7ea9,network=Network(c8e92cf5-e64a-4378-8f87-c574612f73da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4109af21-a3')#033[00m
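
"Successfully plugged vif" is os-vif's generic entry point reporting back: nova converted its network-info dict into the VIFOpenVSwitch object shown and handed it to the "ovs" plugin. A rough, illustrative sketch of that call, assuming os-vif's versioned objects (field values copied from the repr in the log; this is not nova's actual code path):

    # Rough sketch of the os-vif plug call reported above. Field names match
    # the VIFOpenVSwitch(...) repr in the log.
    import os_vif
    from os_vif.objects import instance_info, vif as vif_objects

    os_vif.initialize()

    vif = vif_objects.VIFOpenVSwitch(
        id="4109af21-a3da-49b5-8481-432b45bf7ea9",
        address="fa:16:3e:8c:0c:76",
        vif_name="tap4109af21-a3",
        bridge_name="br-int",
        plugin="ovs",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        port_profile=vif_objects.VIFPortProfileOpenVSwitch(
            interface_id="4109af21-a3da-49b5-8481-432b45bf7ea9"))
    instance = instance_info.InstanceInfo(
        uuid="daa0d61c-ce51-4a65-82e0-106c2654ed92",
        name="instance-00000006")

    os_vif.plug(vif, instance)  # dispatches to the 'ovs' os-vif plugin
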
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.933 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.933 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.934 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No VIF found with MAC fa:16:3e:8c:0c:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.934 256757 INFO nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Using config drive#033[00m
Dec  7 05:11:41 np0005549474 nova_compute[256753]: 2025-12-07 10:11:41.961 256757 DEBUG nova.storage.rbd_utils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image daa0d61c-ce51-4a65-82e0-106c2654ed92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.374 256757 DEBUG nova.network.neutron [req-4e5a6784-652a-4d7d-a362-5880e9eb2a74 req-b28bf171-1380-45e8-9b2e-63d8460ff28e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updated VIF entry in instance network info cache for port 4109af21-a3da-49b5-8481-432b45bf7ea9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.375 256757 DEBUG nova.network.neutron [req-4e5a6784-652a-4d7d-a362-5880e9eb2a74 req-b28bf171-1380-45e8-9b2e-63d8460ff28e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updating instance_info_cache with network_info: [{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.407 256757 DEBUG oslo_concurrency.lockutils [req-4e5a6784-652a-4d7d-a362-5880e9eb2a74 req-b28bf171-1380-45e8-9b2e-63d8460ff28e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:11:42
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'volumes', '.nfs']
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:11:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:11:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.530 256757 INFO nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Creating config drive at /var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/disk.config#033[00m
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.541 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt3i2qqj3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:11:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:42 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.677 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt3i2qqj3" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.710 256757 DEBUG nova.storage.rbd_utils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image daa0d61c-ce51-4a65-82e0-106c2654ed92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.714 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/disk.config daa0d61c-ce51-4a65-82e0-106c2654ed92_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:11:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
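
Each pg_autoscaler line above applies the same rule: a pool's PG target is its share of raw capacity, times its bias, times the cluster's PG budget, quantized to a power of two and only acted on when it strays far (by default more than 3x) from the current pg_num. The budget implied by these numbers is 300 PG replicas, consistent with mon_target_pg_per_osd=100 on a 3-OSD cluster (an assumption, not stated in the log). A worked check of four of the lines:

    # Worked check of the pg_autoscaler arithmetic logged above.
    # Assumption: PG budget = mon_target_pg_per_osd (default 100) * 3 OSDs.
    PG_BUDGET = 300

    pools = {
        # pool: (capacity ratio from the log, bias from the log)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0003459970412515465, 1.0),
        "images":             (0.000665858301588852,  1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }

    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * PG_BUDGET:.6g}")
    # .mgr               -> 0.00215572   (log: 0.0021557..., quantized to 1)
    # vms                -> 0.103799     (log: 0.1037991..., quantized to 32)
    # images             -> 0.199757     (log: 0.1997574..., quantized to 32)
    # cephfs.cephfs.meta -> 0.000610471  (log: 0.0006104..., quantized to 16)
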
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:11:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:11:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:42.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.899 256757 DEBUG oslo_concurrency.processutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/disk.config daa0d61c-ce51-4a65-82e0-106c2654ed92_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.901 256757 INFO nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Deleting local config drive /var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/disk.config because it was imported into RBD.#033[00m
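
The config-drive sequence that just completed is two shell-outs: mkisofs packs the metadata nova staged in a temp directory into an ISO 9660 image labeled config-2, rbd import copies it into the vms pool, and the local file is then deleted. A replay of the two logged commands via subprocess (mkisofs flags abridged; the staging directory contents are whatever nova put there):

    # Sketch: the two commands nova ran above, replayed via subprocess.
    import subprocess

    iso = ("/var/lib/nova/instances/"
           "daa0d61c-ce51-4a65-82e0-106c2654ed92/disk.config")

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmpt3i2qqj3"],
        check=True)

    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "daa0d61c-ce51-4a65-82e0-106c2654ed92_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
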
Dec  7 05:11:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:42 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:42 np0005549474 kernel: tap4109af21-a3: entered promiscuous mode
Dec  7 05:11:42 np0005549474 NetworkManager[49051]: <info>  [1765102302.9691] manager: (tap4109af21-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Dec  7 05:11:42 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:42Z|00049|binding|INFO|Claiming lport 4109af21-a3da-49b5-8481-432b45bf7ea9 for this chassis.
Dec  7 05:11:42 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:42Z|00050|binding|INFO|4109af21-a3da-49b5-8481-432b45bf7ea9: Claiming fa:16:3e:8c:0c:76 10.100.0.13
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.969 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:42 np0005549474 nova_compute[256753]: 2025-12-07 10:11:42.984 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.000 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:0c:76 10.100.0.13'], port_security=['fa:16:3e:8c:0c:76 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'daa0d61c-ce51-4a65-82e0-106c2654ed92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8e92cf5-e64a-4378-8f87-c574612f73da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c4eafcc0-8a7b-4591-b838-69191e9c889f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172a7e02-4a4a-49c7-ab1a-d93e560044ce, chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=4109af21-a3da-49b5-8481-432b45bf7ea9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.003 164143 INFO neutron.agent.ovn.metadata.agent [-] Port 4109af21-a3da-49b5-8481-432b45bf7ea9 in datapath c8e92cf5-e64a-4378-8f87-c574612f73da bound to our chassis#033[00m
Dec  7 05:11:43 np0005549474 systemd-udevd[270009]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.005 164143 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8e92cf5-e64a-4378-8f87-c574612f73da#033[00m
Dec  7 05:11:43 np0005549474 systemd-machined[217882]: New machine qemu-3-instance-00000006.
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.019 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2dd4ec-370f-41f8-bb39-85e71e2235a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.021 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc8e92cf5-e1 in ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.023 262215 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc8e92cf5-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.023 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[a97d42e9-a928-4669-b014-0ae036334784]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 NetworkManager[49051]: <info>  [1765102303.0253] device (tap4109af21-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.025 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[54f1fca4-1e32-4ded-9bad-e96e0eeddad1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 NetworkManager[49051]: <info>  [1765102303.0271] device (tap4109af21-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  7 05:11:43 np0005549474 systemd[1]: Started Virtual Machine qemu-3-instance-00000006.
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.043 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[49218287-5cff-44b6-b664-7f79153b5e5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:43Z|00051|binding|INFO|Setting lport 4109af21-a3da-49b5-8481-432b45bf7ea9 ovn-installed in OVS
Dec  7 05:11:43 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:43Z|00052|binding|INFO|Setting lport 4109af21-a3da-49b5-8481-432b45bf7ea9 up in Southbound
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.069 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.078 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[c9800613-ba8e-433d-b661-dcdae9bb0d2e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v885: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.110 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[89e7876f-6858-46d4-8a85-23252b1ad7c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.119 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[74b83913-5255-47e1-903f-0ce15175ef83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 systemd-udevd[270013]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:11:43 np0005549474 NetworkManager[49051]: <info>  [1765102303.1224] manager: (tapc8e92cf5-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.158 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[7a51f657-8224-41bf-a6cb-e4eda504be4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.162 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd90a91-95d7-43bb-a527-88e6cdd798b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 NetworkManager[49051]: <info>  [1765102303.1875] device (tapc8e92cf5-e0): carrier: link connected
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.193 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[6da82eba-f803-47f8-aea1-ec0f2601e3b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.216 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff9b2da-d6fe-4344-8c18-d177abf2bde8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8e92cf5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:71:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426584, 'reachable_time': 15679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270043, 'error': None, 'target': 'ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.240 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[ab41c9f5-44dd-4474-ac3d-df4b26d6a717]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb0:7107'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 426584, 'tstamp': 426584}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270044, 'error': None, 'target': 'ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.267 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[5d33a02d-ac7e-4f3d-ab11-1b08756a3929]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8e92cf5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:71:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426584, 'reachable_time': 15679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270045, 'error': None, 'target': 'ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.307 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[460a48f8-fc56-4b7d-8ccc-09b103f5a214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.382 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcec181-e26b-4dbd-a64b-4236c53d7787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.384 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8e92cf5-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.385 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:11:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:43 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.386 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8e92cf5-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.389 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:43 np0005549474 kernel: tapc8e92cf5-e0: entered promiscuous mode
Dec  7 05:11:43 np0005549474 NetworkManager[49051]: <info>  [1765102303.3907] manager: (tapc8e92cf5-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.391 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.393 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8e92cf5-e0, col_values=(('external_ids', {'iface-id': '5f556ba9-478e-466f-a4d9-dec36f26c0bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.394 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:43 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:43Z|00053|binding|INFO|Releasing lport 5f556ba9-478e-466f-a4d9-dec36f26c0bf from this chassis (sb_readonly=0)
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.425 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.426 164143 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8e92cf5-e64a-4378-8f87-c574612f73da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8e92cf5-e64a-4378-8f87-c574612f73da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.427 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[56abccb7-04e6-47f7-b8ce-8bf2f11c20ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.427 164143 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: global
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    log         /dev/log local0 debug
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    log-tag     haproxy-metadata-proxy-c8e92cf5-e64a-4378-8f87-c574612f73da
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    user        root
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    group       root
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    maxconn     1024
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    pidfile     /var/lib/neutron/external/pids/c8e92cf5-e64a-4378-8f87-c574612f73da.pid.haproxy
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    daemon
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: defaults
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    log global
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    mode http
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    option httplog
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    option dontlognull
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    option http-server-close
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    option forwardfor
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    retries                 3
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    timeout http-request    30s
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    timeout connect         30s
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    timeout client          32s
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    timeout server          32s
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    timeout http-keep-alive 30s
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: listen listener
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    bind 169.254.169.254:80
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    server metadata /var/lib/neutron/metadata_proxy
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]:    http-request add-header X-OVN-Network-ID c8e92cf5-e64a-4378-8f87-c574612f73da
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  7 05:11:43 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:11:43.428 164143 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da', 'env', 'PROCESS_TAG=haproxy-c8e92cf5-e64a-4378-8f87-c574612f73da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c8e92cf5-e64a-4378-8f87-c574612f73da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.441 256757 DEBUG nova.compute.manager [req-d4ad3af2-0d85-4dd9-869b-dcbb0977cd03 req-a2163649-97eb-447c-b352-8df074f20a31 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-vif-plugged-4109af21-a3da-49b5-8481-432b45bf7ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.442 256757 DEBUG oslo_concurrency.lockutils [req-d4ad3af2-0d85-4dd9-869b-dcbb0977cd03 req-a2163649-97eb-447c-b352-8df074f20a31 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.442 256757 DEBUG oslo_concurrency.lockutils [req-d4ad3af2-0d85-4dd9-869b-dcbb0977cd03 req-a2163649-97eb-447c-b352-8df074f20a31 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.442 256757 DEBUG oslo_concurrency.lockutils [req-d4ad3af2-0d85-4dd9-869b-dcbb0977cd03 req-a2163649-97eb-447c-b352-8df074f20a31 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.442 256757 DEBUG nova.compute.manager [req-d4ad3af2-0d85-4dd9-869b-dcbb0977cd03 req-a2163649-97eb-447c-b352-8df074f20a31 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Processing event network-vif-plugged-4109af21-a3da-49b5-8481-432b45bf7ea9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  7 05:11:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:11:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:43.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.606 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102303.6056354, daa0d61c-ce51-4a65-82e0-106c2654ed92 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.606 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] VM Started (Lifecycle Event)
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.609 256757 DEBUG nova.compute.manager [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.613 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.615 256757 INFO nova.virt.libvirt.driver [-] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Instance spawned successfully.
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.616 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.640 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.645 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.645 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.646 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.646 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.647 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.647 256757 DEBUG nova.virt.libvirt.driver [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.652 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.685 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.685 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102303.6059275, daa0d61c-ce51-4a65-82e0-106c2654ed92 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.685 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] VM Paused (Lifecycle Event)
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.707 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.714 256757 INFO nova.compute.manager [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Took 6.57 seconds to spawn the instance on the hypervisor.
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.714 256757 DEBUG nova.compute.manager [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.716 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102303.6118813, daa0d61c-ce51-4a65-82e0-106c2654ed92 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.717 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] VM Resumed (Lifecycle Event)
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.752 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.755 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.780 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.804 256757 INFO nova.compute.manager [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Took 7.60 seconds to build instance.
Dec  7 05:11:43 np0005549474 nova_compute[256753]: 2025-12-07 10:11:43.822 256757 DEBUG oslo_concurrency.lockutils [None req-1f299870-cd72-4f2e-a2f1-36348d320151 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:11:43 np0005549474 podman[270146]: 2025-12-07 10:11:43.922504116 +0000 UTC m=+0.078193112 container create b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:11:43 np0005549474 systemd[1]: Started libpod-conmon-b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db.scope.
Dec  7 05:11:43 np0005549474 podman[270146]: 2025-12-07 10:11:43.888418837 +0000 UTC m=+0.044107943 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  7 05:11:44 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:11:44 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d93960dc1aa2b7d86cf4ea276ea4e3a8b1d587e1ee1a9c195105dc57b56cbb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  7 05:11:44 np0005549474 podman[270146]: 2025-12-07 10:11:44.026555241 +0000 UTC m=+0.182244317 container init b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 05:11:44 np0005549474 podman[270146]: 2025-12-07 10:11:44.033301005 +0000 UTC m=+0.188990041 container start b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  7 05:11:44 np0005549474 neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da[270160]: [NOTICE]   (270164) : New worker (270166) forked
Dec  7 05:11:44 np0005549474 neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da[270160]: [NOTICE]   (270164) : Loading success.
Dec  7 05:11:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:44 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640043d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:44.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:44 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v886: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 273 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Dec  7 05:11:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:45 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588003370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:45 np0005549474 nova_compute[256753]: 2025-12-07 10:11:45.272 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:45 np0005549474 nova_compute[256753]: 2025-12-07 10:11:45.526 256757 DEBUG nova.compute.manager [req-df46e754-bb63-4ae8-8e6a-5da66ac680e1 req-37810848-20b3-494a-93ad-a932c2e3f200 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-vif-plugged-4109af21-a3da-49b5-8481-432b45bf7ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:11:45 np0005549474 nova_compute[256753]: 2025-12-07 10:11:45.526 256757 DEBUG oslo_concurrency.lockutils [req-df46e754-bb63-4ae8-8e6a-5da66ac680e1 req-37810848-20b3-494a-93ad-a932c2e3f200 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:11:45 np0005549474 nova_compute[256753]: 2025-12-07 10:11:45.527 256757 DEBUG oslo_concurrency.lockutils [req-df46e754-bb63-4ae8-8e6a-5da66ac680e1 req-37810848-20b3-494a-93ad-a932c2e3f200 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:11:45 np0005549474 nova_compute[256753]: 2025-12-07 10:11:45.527 256757 DEBUG oslo_concurrency.lockutils [req-df46e754-bb63-4ae8-8e6a-5da66ac680e1 req-37810848-20b3-494a-93ad-a932c2e3f200 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:11:45 np0005549474 nova_compute[256753]: 2025-12-07 10:11:45.527 256757 DEBUG nova.compute.manager [req-df46e754-bb63-4ae8-8e6a-5da66ac680e1 req-37810848-20b3-494a-93ad-a932c2e3f200 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] No waiting events found dispatching network-vif-plugged-4109af21-a3da-49b5-8481-432b45bf7ea9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:11:45 np0005549474 nova_compute[256753]: 2025-12-07 10:11:45.527 256757 WARNING nova.compute.manager [req-df46e754-bb63-4ae8-8e6a-5da66ac680e1 req-37810848-20b3-494a-93ad-a932c2e3f200 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received unexpected event network-vif-plugged-4109af21-a3da-49b5-8481-432b45bf7ea9 for instance with vm_state active and task_state None.
Dec  7 05:11:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:45.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:46.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:46 np0005549474 nova_compute[256753]: 2025-12-07 10:11:46.881 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640043f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v887: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 273 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Dec  7 05:11:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:11:47.149Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:11:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:47 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:47 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:47Z|00054|binding|INFO|Releasing lport 5f556ba9-478e-466f-a4d9-dec36f26c0bf from this chassis (sb_readonly=0)
Dec  7 05:11:47 np0005549474 NetworkManager[49051]: <info>  [1765102307.5287] manager: (patch-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec  7 05:11:47 np0005549474 NetworkManager[49051]: <info>  [1765102307.5305] manager: (patch-br-int-to-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec  7 05:11:47 np0005549474 nova_compute[256753]: 2025-12-07 10:11:47.527 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:47 np0005549474 nova_compute[256753]: 2025-12-07 10:11:47.560 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:47 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:47Z|00055|binding|INFO|Releasing lport 5f556ba9-478e-466f-a4d9-dec36f26c0bf from this chassis (sb_readonly=0)
Dec  7 05:11:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:47 np0005549474 nova_compute[256753]: 2025-12-07 10:11:47.569 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:47.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:47 np0005549474 nova_compute[256753]: 2025-12-07 10:11:47.902 256757 DEBUG nova.compute.manager [req-b7756829-5f98-482d-8140-2e92a81f2253 req-25384a37-2a12-4966-a3f7-6f5d400acd11 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-changed-4109af21-a3da-49b5-8481-432b45bf7ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:11:47 np0005549474 nova_compute[256753]: 2025-12-07 10:11:47.903 256757 DEBUG nova.compute.manager [req-b7756829-5f98-482d-8140-2e92a81f2253 req-25384a37-2a12-4966-a3f7-6f5d400acd11 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Refreshing instance network info cache due to event network-changed-4109af21-a3da-49b5-8481-432b45bf7ea9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  7 05:11:47 np0005549474 nova_compute[256753]: 2025-12-07 10:11:47.904 256757 DEBUG oslo_concurrency.lockutils [req-b7756829-5f98-482d-8140-2e92a81f2253 req-25384a37-2a12-4966-a3f7-6f5d400acd11 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  7 05:11:47 np0005549474 nova_compute[256753]: 2025-12-07 10:11:47.905 256757 DEBUG oslo_concurrency.lockutils [req-b7756829-5f98-482d-8140-2e92a81f2253 req-25384a37-2a12-4966-a3f7-6f5d400acd11 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  7 05:11:47 np0005549474 nova_compute[256753]: 2025-12-07 10:11:47.906 256757 DEBUG nova.network.neutron [req-b7756829-5f98-482d-8140-2e92a81f2253 req-25384a37-2a12-4966-a3f7-6f5d400acd11 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Refreshing network info cache for port 4109af21-a3da-49b5-8481-432b45bf7ea9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  7 05:11:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:48 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:48.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:48 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v888: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Dec  7 05:11:49 np0005549474 nova_compute[256753]: 2025-12-07 10:11:49.374 256757 DEBUG nova.network.neutron [req-b7756829-5f98-482d-8140-2e92a81f2253 req-25384a37-2a12-4966-a3f7-6f5d400acd11 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updated VIF entry in instance network info cache for port 4109af21-a3da-49b5-8481-432b45bf7ea9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  7 05:11:49 np0005549474 nova_compute[256753]: 2025-12-07 10:11:49.375 256757 DEBUG nova.network.neutron [req-b7756829-5f98-482d-8140-2e92a81f2253 req-25384a37-2a12-4966-a3f7-6f5d400acd11 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updating instance_info_cache with network_info: [{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:11:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:49 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:49 np0005549474 nova_compute[256753]: 2025-12-07 10:11:49.396 256757 DEBUG oslo_concurrency.lockutils [req-b7756829-5f98-482d-8140-2e92a81f2253 req-25384a37-2a12-4966-a3f7-6f5d400acd11 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:11:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:49.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:49] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Dec  7 05:11:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:49] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Dec  7 05:11:50 np0005549474 nova_compute[256753]: 2025-12-07 10:11:50.302 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:50 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:50.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:50 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v889: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 335 KiB/s wr, 87 op/s
Dec  7 05:11:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:51 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:51.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:51 np0005549474 nova_compute[256753]: 2025-12-07 10:11:51.883 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:52.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v890: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  7 05:11:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:53 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:53.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:54 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:54.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:54 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564004450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v891: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Dec  7 05:11:55 np0005549474 nova_compute[256753]: 2025-12-07 10:11:55.304 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:55 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:55.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:56 np0005549474 podman[270190]: 2025-12-07 10:11:56.311765815 +0000 UTC m=+0.113407391 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec  7 05:11:56 np0005549474 podman[270191]: 2025-12-07 10:11:56.357381748 +0000 UTC m=+0.157130563 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  7 05:11:56 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:56Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:0c:76 10.100.0.13
Dec  7 05:11:56 np0005549474 ovn_controller[154296]: 2025-12-07T10:11:56Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:0c:76 10.100.0.13
Dec  7 05:11:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:56 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:11:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:56.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:11:56 np0005549474 nova_compute[256753]: 2025-12-07 10:11:56.885 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:11:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:56 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v892: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 56 op/s
Dec  7 05:11:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:11:57.150Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:11:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:11:57.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:11:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:57 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:11:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:11:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:11:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:57.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:11:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:11:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:58 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 05:11:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:11:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 05:11:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:11:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:58.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:58 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v893: 337 pgs: 337 active+clean; 111 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 78 op/s
Dec  7 05:11:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:11:59 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:11:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:11:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:11:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:59.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:11:59 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:11:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:59] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Dec  7 05:11:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:11:59] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Dec  7 05:12:00 np0005549474 nova_compute[256753]: 2025-12-07 10:12:00.306 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:00 np0005549474 podman[270410]: 2025-12-07 10:12:00.316936402 +0000 UTC m=+0.067956312 container create c069d71113c07660ecdedeaedf33f870fe18418c125d04657a2a7a26bd79efb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 05:12:00 np0005549474 systemd[1]: Started libpod-conmon-c069d71113c07660ecdedeaedf33f870fe18418c125d04657a2a7a26bd79efb8.scope.
Dec  7 05:12:00 np0005549474 podman[270410]: 2025-12-07 10:12:00.284390486 +0000 UTC m=+0.035410446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:12:00 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:12:00 np0005549474 podman[270410]: 2025-12-07 10:12:00.423407633 +0000 UTC m=+0.174427543 container init c069d71113c07660ecdedeaedf33f870fe18418c125d04657a2a7a26bd79efb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_snyder, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:12:00 np0005549474 podman[270410]: 2025-12-07 10:12:00.435715019 +0000 UTC m=+0.186734919 container start c069d71113c07660ecdedeaedf33f870fe18418c125d04657a2a7a26bd79efb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:12:00 np0005549474 podman[270410]: 2025-12-07 10:12:00.439112591 +0000 UTC m=+0.190132481 container attach c069d71113c07660ecdedeaedf33f870fe18418c125d04657a2a7a26bd79efb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 05:12:00 np0005549474 confident_snyder[270424]: 167 167
Dec  7 05:12:00 np0005549474 systemd[1]: libpod-c069d71113c07660ecdedeaedf33f870fe18418c125d04657a2a7a26bd79efb8.scope: Deactivated successfully.
Dec  7 05:12:00 np0005549474 podman[270410]: 2025-12-07 10:12:00.44492507 +0000 UTC m=+0.195944960 container died c069d71113c07660ecdedeaedf33f870fe18418c125d04657a2a7a26bd79efb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_snyder, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 05:12:00 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0c5fa7dbb695bc32b698120545af708078c4855a0fbdf757aa3fbfa1c948b04a-merged.mount: Deactivated successfully.
Dec  7 05:12:00 np0005549474 podman[270410]: 2025-12-07 10:12:00.498319994 +0000 UTC m=+0.249339904 container remove c069d71113c07660ecdedeaedf33f870fe18418c125d04657a2a7a26bd79efb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  7 05:12:00 np0005549474 systemd[1]: libpod-conmon-c069d71113c07660ecdedeaedf33f870fe18418c125d04657a2a7a26bd79efb8.scope: Deactivated successfully.
Dec  7 05:12:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:00 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:00 np0005549474 podman[270450]: 2025-12-07 10:12:00.773989296 +0000 UTC m=+0.074285885 container create 12a6a051cad3f2d70b7e1757aac81dd660dae79181c7bfe90ba2619fbf2c36fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:12:00 np0005549474 systemd[1]: Started libpod-conmon-12a6a051cad3f2d70b7e1757aac81dd660dae79181c7bfe90ba2619fbf2c36fe.scope.
Dec  7 05:12:00 np0005549474 podman[270450]: 2025-12-07 10:12:00.743445644 +0000 UTC m=+0.043742283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:12:00 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:12:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3899dff8b537784e2042c5135270dea88bb6f5496cc71a5fd824f151135abe04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3899dff8b537784e2042c5135270dea88bb6f5496cc71a5fd824f151135abe04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3899dff8b537784e2042c5135270dea88bb6f5496cc71a5fd824f151135abe04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3899dff8b537784e2042c5135270dea88bb6f5496cc71a5fd824f151135abe04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:00 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3899dff8b537784e2042c5135270dea88bb6f5496cc71a5fd824f151135abe04/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:00.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:00 np0005549474 podman[270450]: 2025-12-07 10:12:00.899090944 +0000 UTC m=+0.199387603 container init 12a6a051cad3f2d70b7e1757aac81dd660dae79181c7bfe90ba2619fbf2c36fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wozniak, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:12:00 np0005549474 podman[270450]: 2025-12-07 10:12:00.915692367 +0000 UTC m=+0.215988956 container start 12a6a051cad3f2d70b7e1757aac81dd660dae79181c7bfe90ba2619fbf2c36fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wozniak, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 05:12:00 np0005549474 podman[270450]: 2025-12-07 10:12:00.920236631 +0000 UTC m=+0.220533260 container attach 12a6a051cad3f2d70b7e1757aac81dd660dae79181c7bfe90ba2619fbf2c36fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Dec  7 05:12:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:00 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v894: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 468 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec  7 05:12:01 np0005549474 relaxed_wozniak[270467]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:12:01 np0005549474 relaxed_wozniak[270467]: --> All data devices are unavailable
Dec  7 05:12:01 np0005549474 systemd[1]: libpod-12a6a051cad3f2d70b7e1757aac81dd660dae79181c7bfe90ba2619fbf2c36fe.scope: Deactivated successfully.
Dec  7 05:12:01 np0005549474 podman[270450]: 2025-12-07 10:12:01.295348681 +0000 UTC m=+0.595645270 container died 12a6a051cad3f2d70b7e1757aac81dd660dae79181c7bfe90ba2619fbf2c36fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:12:01 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3899dff8b537784e2042c5135270dea88bb6f5496cc71a5fd824f151135abe04-merged.mount: Deactivated successfully.
Dec  7 05:12:01 np0005549474 podman[270450]: 2025-12-07 10:12:01.348119978 +0000 UTC m=+0.648416527 container remove 12a6a051cad3f2d70b7e1757aac81dd660dae79181c7bfe90ba2619fbf2c36fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wozniak, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:12:01 np0005549474 systemd[1]: libpod-conmon-12a6a051cad3f2d70b7e1757aac81dd660dae79181c7bfe90ba2619fbf2c36fe.scope: Deactivated successfully.
Dec  7 05:12:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:01 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:01.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:01 np0005549474 nova_compute[256753]: 2025-12-07 10:12:01.886 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:02 np0005549474 podman[270588]: 2025-12-07 10:12:02.11924643 +0000 UTC m=+0.085541702 container create cdd8d5e2dd87e20fc549abd558318287008700ce8b1b64e3a118193005c22d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:12:02 np0005549474 systemd[1]: Started libpod-conmon-cdd8d5e2dd87e20fc549abd558318287008700ce8b1b64e3a118193005c22d6b.scope.
Dec  7 05:12:02 np0005549474 podman[270588]: 2025-12-07 10:12:02.089060677 +0000 UTC m=+0.055355999 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:12:02 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:12:02 np0005549474 podman[270588]: 2025-12-07 10:12:02.232657959 +0000 UTC m=+0.198953271 container init cdd8d5e2dd87e20fc549abd558318287008700ce8b1b64e3a118193005c22d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:12:02 np0005549474 podman[270588]: 2025-12-07 10:12:02.245744786 +0000 UTC m=+0.212040058 container start cdd8d5e2dd87e20fc549abd558318287008700ce8b1b64e3a118193005c22d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 05:12:02 np0005549474 podman[270588]: 2025-12-07 10:12:02.2495483 +0000 UTC m=+0.215843642 container attach cdd8d5e2dd87e20fc549abd558318287008700ce8b1b64e3a118193005c22d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 05:12:02 np0005549474 naughty_lederberg[270605]: 167 167
Dec  7 05:12:02 np0005549474 systemd[1]: libpod-cdd8d5e2dd87e20fc549abd558318287008700ce8b1b64e3a118193005c22d6b.scope: Deactivated successfully.
Dec  7 05:12:02 np0005549474 podman[270588]: 2025-12-07 10:12:02.253676883 +0000 UTC m=+0.219972165 container died cdd8d5e2dd87e20fc549abd558318287008700ce8b1b64e3a118193005c22d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:12:02 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2ba2c2349a438c7a835d8718b0d867f0184f5392e8aa497d57d06c26307fc8cf-merged.mount: Deactivated successfully.
Dec  7 05:12:02 np0005549474 podman[270588]: 2025-12-07 10:12:02.303398847 +0000 UTC m=+0.269694119 container remove cdd8d5e2dd87e20fc549abd558318287008700ce8b1b64e3a118193005c22d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 05:12:02 np0005549474 systemd[1]: libpod-conmon-cdd8d5e2dd87e20fc549abd558318287008700ce8b1b64e3a118193005c22d6b.scope: Deactivated successfully.
Dec  7 05:12:02 np0005549474 nova_compute[256753]: 2025-12-07 10:12:02.339 256757 INFO nova.compute.manager [None req-d757da63-f6ae-4885-9a0c-79393a03ab07 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Get console output
Dec  7 05:12:02 np0005549474 nova_compute[256753]: 2025-12-07 10:12:02.348 263860 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec  7 05:12:02 np0005549474 podman[270625]: 2025-12-07 10:12:02.470797908 +0000 UTC m=+0.118872069 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec  7 05:12:02 np0005549474 podman[270650]: 2025-12-07 10:12:02.525503609 +0000 UTC m=+0.064358045 container create f847756ce0598f97219e294337ac1b1445654d85506a9476bec4d333d266f5e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 05:12:02 np0005549474 systemd[1]: Started libpod-conmon-f847756ce0598f97219e294337ac1b1445654d85506a9476bec4d333d266f5e0.scope.
Dec  7 05:12:02 np0005549474 podman[270650]: 2025-12-07 10:12:02.50095241 +0000 UTC m=+0.039806926 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:12:02 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:12:02 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17cd76b55c93c6476e325b9fe3ff98c68964f5c1d12b2b8ba70fc8252ee088c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:02 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17cd76b55c93c6476e325b9fe3ff98c68964f5c1d12b2b8ba70fc8252ee088c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:02 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17cd76b55c93c6476e325b9fe3ff98c68964f5c1d12b2b8ba70fc8252ee088c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:02 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17cd76b55c93c6476e325b9fe3ff98c68964f5c1d12b2b8ba70fc8252ee088c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:02 np0005549474 podman[270650]: 2025-12-07 10:12:02.641464068 +0000 UTC m=+0.180318534 container init f847756ce0598f97219e294337ac1b1445654d85506a9476bec4d333d266f5e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 05:12:02 np0005549474 podman[270650]: 2025-12-07 10:12:02.653038853 +0000 UTC m=+0.191893289 container start f847756ce0598f97219e294337ac1b1445654d85506a9476bec4d333d266f5e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 05:12:02 np0005549474 podman[270650]: 2025-12-07 10:12:02.656847748 +0000 UTC m=+0.195702184 container attach f847756ce0598f97219e294337ac1b1445654d85506a9476bec4d333d266f5e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 05:12:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:02 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:12:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:02.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:02 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]: {
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:    "0": [
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:        {
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            "devices": [
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "/dev/loop3"
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            ],
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            "lv_name": "ceph_lv0",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            "lv_size": "21470642176",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            "name": "ceph_lv0",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            "tags": {
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.cluster_name": "ceph",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.crush_device_class": "",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.encrypted": "0",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.osd_id": "0",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.type": "block",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.vdo": "0",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:                "ceph.with_tpm": "0"
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            },
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            "type": "block",
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:            "vg_name": "ceph_vg0"
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:        }
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]:    ]
Dec  7 05:12:02 np0005549474 stupefied_jemison[270669]: }
Dec  7 05:12:02 np0005549474 systemd[1]: libpod-f847756ce0598f97219e294337ac1b1445654d85506a9476bec4d333d266f5e0.scope: Deactivated successfully.
Dec  7 05:12:02 np0005549474 podman[270650]: 2025-12-07 10:12:02.984907136 +0000 UTC m=+0.523761592 container died f847756ce0598f97219e294337ac1b1445654d85506a9476bec4d333d266f5e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 05:12:03 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f17cd76b55c93c6476e325b9fe3ff98c68964f5c1d12b2b8ba70fc8252ee088c-merged.mount: Deactivated successfully.
Dec  7 05:12:03 np0005549474 podman[270650]: 2025-12-07 10:12:03.03462793 +0000 UTC m=+0.573482366 container remove f847756ce0598f97219e294337ac1b1445654d85506a9476bec4d333d266f5e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 05:12:03 np0005549474 systemd[1]: libpod-conmon-f847756ce0598f97219e294337ac1b1445654d85506a9476bec4d333d266f5e0.scope: Deactivated successfully.
Dec  7 05:12:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v895: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  7 05:12:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:03.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:03 np0005549474 podman[270785]: 2025-12-07 10:12:03.805020041 +0000 UTC m=+0.080154044 container create a44e7225e4e87d9fb630fe1da068099373657eeb37afd195b94c13480714e9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:12:03 np0005549474 systemd[1]: Started libpod-conmon-a44e7225e4e87d9fb630fe1da068099373657eeb37afd195b94c13480714e9dc.scope.
Dec  7 05:12:03 np0005549474 podman[270785]: 2025-12-07 10:12:03.771460698 +0000 UTC m=+0.046594761 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:12:03 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:12:03 np0005549474 podman[270785]: 2025-12-07 10:12:03.910054103 +0000 UTC m=+0.185188116 container init a44e7225e4e87d9fb630fe1da068099373657eeb37afd195b94c13480714e9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:12:03 np0005549474 podman[270785]: 2025-12-07 10:12:03.922738289 +0000 UTC m=+0.197872302 container start a44e7225e4e87d9fb630fe1da068099373657eeb37afd195b94c13480714e9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:12:03 np0005549474 podman[270785]: 2025-12-07 10:12:03.927626292 +0000 UTC m=+0.202760305 container attach a44e7225e4e87d9fb630fe1da068099373657eeb37afd195b94c13480714e9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:12:03 np0005549474 exciting_moore[270827]: 167 167
Dec  7 05:12:03 np0005549474 systemd[1]: libpod-a44e7225e4e87d9fb630fe1da068099373657eeb37afd195b94c13480714e9dc.scope: Deactivated successfully.
Dec  7 05:12:03 np0005549474 podman[270785]: 2025-12-07 10:12:03.933353888 +0000 UTC m=+0.208487901 container died a44e7225e4e87d9fb630fe1da068099373657eeb37afd195b94c13480714e9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:12:03 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ae2c3a98eaa6df762c68a81c7e3c432a1a6f043f86c0976b6917a338cfc7899d-merged.mount: Deactivated successfully.
Dec  7 05:12:03 np0005549474 podman[270785]: 2025-12-07 10:12:03.992950142 +0000 UTC m=+0.268084155 container remove a44e7225e4e87d9fb630fe1da068099373657eeb37afd195b94c13480714e9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_moore, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:12:04 np0005549474 systemd[1]: libpod-conmon-a44e7225e4e87d9fb630fe1da068099373657eeb37afd195b94c13480714e9dc.scope: Deactivated successfully.
Dec  7 05:12:04 np0005549474 podman[270851]: 2025-12-07 10:12:04.25530618 +0000 UTC m=+0.063357507 container create be79db471a00be742d77838e5550665a897de0ce4216e7dc87d89d32924342cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:12:04 np0005549474 systemd[1]: Started libpod-conmon-be79db471a00be742d77838e5550665a897de0ce4216e7dc87d89d32924342cb.scope.
Dec  7 05:12:04 np0005549474 podman[270851]: 2025-12-07 10:12:04.236135628 +0000 UTC m=+0.044186965 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:12:04 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:12:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d74adeafe33fe59f38436d7862b98432df6f47c54a7600d53a15bd043c5dce1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d74adeafe33fe59f38436d7862b98432df6f47c54a7600d53a15bd043c5dce1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d74adeafe33fe59f38436d7862b98432df6f47c54a7600d53a15bd043c5dce1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:04 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d74adeafe33fe59f38436d7862b98432df6f47c54a7600d53a15bd043c5dce1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:04 np0005549474 podman[270851]: 2025-12-07 10:12:04.378026324 +0000 UTC m=+0.186077641 container init be79db471a00be742d77838e5550665a897de0ce4216e7dc87d89d32924342cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_burnell, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 05:12:04 np0005549474 podman[270851]: 2025-12-07 10:12:04.390067032 +0000 UTC m=+0.198118349 container start be79db471a00be742d77838e5550665a897de0ce4216e7dc87d89d32924342cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:12:04 np0005549474 podman[270851]: 2025-12-07 10:12:04.393463815 +0000 UTC m=+0.201515102 container attach be79db471a00be742d77838e5550665a897de0ce4216e7dc87d89d32924342cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_burnell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:12:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:04 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:04.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:04 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:05 np0005549474 lvm[270943]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:12:05 np0005549474 lvm[270943]: VG ceph_vg0 finished
Dec  7 05:12:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v896: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  7 05:12:05 np0005549474 goofy_burnell[270868]: {}
Dec  7 05:12:05 np0005549474 systemd[1]: libpod-be79db471a00be742d77838e5550665a897de0ce4216e7dc87d89d32924342cb.scope: Deactivated successfully.
Dec  7 05:12:05 np0005549474 podman[270851]: 2025-12-07 10:12:05.21101195 +0000 UTC m=+1.019063237 container died be79db471a00be742d77838e5550665a897de0ce4216e7dc87d89d32924342cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_burnell, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 05:12:05 np0005549474 systemd[1]: libpod-be79db471a00be742d77838e5550665a897de0ce4216e7dc87d89d32924342cb.scope: Consumed 1.315s CPU time.
Dec  7 05:12:05 np0005549474 systemd[1]: var-lib-containers-storage-overlay-9d74adeafe33fe59f38436d7862b98432df6f47c54a7600d53a15bd043c5dce1-merged.mount: Deactivated successfully.
Dec  7 05:12:05 np0005549474 podman[270851]: 2025-12-07 10:12:05.27412574 +0000 UTC m=+1.082177067 container remove be79db471a00be742d77838e5550665a897de0ce4216e7dc87d89d32924342cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:12:05 np0005549474 systemd[1]: libpod-conmon-be79db471a00be742d77838e5550665a897de0ce4216e7dc87d89d32924342cb.scope: Deactivated successfully.
Dec  7 05:12:05 np0005549474 nova_compute[256753]: 2025-12-07 10:12:05.307 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:12:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:12:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:12:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:12:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:05 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:05.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:06 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:12:06 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:12:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:06 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:06.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:06 np0005549474 nova_compute[256753]: 2025-12-07 10:12:06.888 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:06 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v897: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 316 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  7 05:12:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:12:07.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:12:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:07 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:07 np0005549474 nova_compute[256753]: 2025-12-07 10:12:07.488 256757 DEBUG oslo_concurrency.lockutils [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "interface-daa0d61c-ce51-4a65-82e0-106c2654ed92-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:12:07 np0005549474 nova_compute[256753]: 2025-12-07 10:12:07.488 256757 DEBUG oslo_concurrency.lockutils [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "interface-daa0d61c-ce51-4a65-82e0-106c2654ed92-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:12:07 np0005549474 nova_compute[256753]: 2025-12-07 10:12:07.489 256757 DEBUG nova.objects.instance [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'flavor' on Instance uuid daa0d61c-ce51-4a65-82e0-106c2654ed92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:12:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:07.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:12:08 np0005549474 nova_compute[256753]: 2025-12-07 10:12:08.384 256757 DEBUG nova.objects.instance [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'pci_requests' on Instance uuid daa0d61c-ce51-4a65-82e0-106c2654ed92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:12:08 np0005549474 nova_compute[256753]: 2025-12-07 10:12:08.401 256757 DEBUG nova.network.neutron [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  7 05:12:08 np0005549474 nova_compute[256753]: 2025-12-07 10:12:08.667 256757 DEBUG nova.policy [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f27cf20bf8c4429aa12589418a57e41', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ad61a97ffab4252be3eafb028b560c1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  7 05:12:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:08 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:08.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:08 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v898: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 316 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  7 05:12:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:09 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:09 np0005549474 nova_compute[256753]: 2025-12-07 10:12:09.442 256757 DEBUG nova.network.neutron [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Successfully created port: fbe265e8-4ccb-490c-b57d-5c1633844053 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  7 05:12:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:09.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:09] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Dec  7 05:12:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:09] "GET /metrics HTTP/1.1" 200 48401 "" "Prometheus/2.51.0"
Dec  7 05:12:10 np0005549474 nova_compute[256753]: 2025-12-07 10:12:10.311 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:10 np0005549474 nova_compute[256753]: 2025-12-07 10:12:10.530 256757 DEBUG nova.network.neutron [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Successfully updated port: fbe265e8-4ccb-490c-b57d-5c1633844053 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  7 05:12:10 np0005549474 nova_compute[256753]: 2025-12-07 10:12:10.552 256757 DEBUG oslo_concurrency.lockutils [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:12:10 np0005549474 nova_compute[256753]: 2025-12-07 10:12:10.553 256757 DEBUG oslo_concurrency.lockutils [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquired lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:12:10 np0005549474 nova_compute[256753]: 2025-12-07 10:12:10.553 256757 DEBUG nova.network.neutron [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  7 05:12:10 np0005549474 nova_compute[256753]: 2025-12-07 10:12:10.662 256757 DEBUG nova.compute.manager [req-2ad0b760-9b9e-4a9a-b0bd-f877c7c63c7e req-598542a2-db11-426b-94de-ba1cab5388a0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-changed-fbe265e8-4ccb-490c-b57d-5c1633844053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:12:10 np0005549474 nova_compute[256753]: 2025-12-07 10:12:10.662 256757 DEBUG nova.compute.manager [req-2ad0b760-9b9e-4a9a-b0bd-f877c7c63c7e req-598542a2-db11-426b-94de-ba1cab5388a0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Refreshing instance network info cache due to event network-changed-fbe265e8-4ccb-490c-b57d-5c1633844053. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:12:10 np0005549474 nova_compute[256753]: 2025-12-07 10:12:10.663 256757 DEBUG oslo_concurrency.lockutils [req-2ad0b760-9b9e-4a9a-b0bd-f877c7c63c7e req-598542a2-db11-426b-94de-ba1cab5388a0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:12:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:10 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:10 np0005549474 nova_compute[256753]: 2025-12-07 10:12:10.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:12:10 np0005549474 nova_compute[256753]: 2025-12-07 10:12:10.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  7 05:12:10 np0005549474 nova_compute[256753]: 2025-12-07 10:12:10.780 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  7 05:12:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:10.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:10 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v899: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 859 KiB/s wr, 40 op/s
Dec  7 05:12:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:11 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:11.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:11 np0005549474 nova_compute[256753]: 2025-12-07 10:12:11.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:12:11 np0005549474 nova_compute[256753]: 2025-12-07 10:12:11.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  7 05:12:11 np0005549474 nova_compute[256753]: 2025-12-07 10:12:11.889 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:12:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:12:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:12:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.536 256757 DEBUG nova.network.neutron [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updating instance_info_cache with network_info: [{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:12:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:12:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:12:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:12:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.552 256757 DEBUG oslo_concurrency.lockutils [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Releasing lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.553 256757 DEBUG oslo_concurrency.lockutils [req-2ad0b760-9b9e-4a9a-b0bd-f877c7c63c7e req-598542a2-db11-426b-94de-ba1cab5388a0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.553 256757 DEBUG nova.network.neutron [req-2ad0b760-9b9e-4a9a-b0bd-f877c7c63c7e req-598542a2-db11-426b-94de-ba1cab5388a0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Refreshing network info cache for port fbe265e8-4ccb-490c-b57d-5c1633844053 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.557 256757 DEBUG nova.virt.libvirt.vif [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1390661383',display_name='tempest-TestNetworkBasicOps-server-1390661383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1390661383',id=6,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+/vPI4e2NtH6h2oCv9Fj3XWNq8zUo66YWV86MfpANVywGXA0slkI6U0K669sWiaSD+5dsMO7JVa1SJLOuvVvWAjhWYnI3Sk4xqn8SB4wmPdrCzHMVsE1qT7PZGkKUz3w==',key_name='tempest-TestNetworkBasicOps-1190719752',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-rn85oa33',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:11:43Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=daa0d61c-ce51-4a65-82e0-106c2654ed92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.558 256757 DEBUG nova.network.os_vif_util [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.559 256757 DEBUG nova.network.os_vif_util [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.559 256757 DEBUG os_vif [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.560 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.560 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.560 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.563 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.563 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbe265e8-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.564 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbe265e8-4c, col_values=(('external_ids', {'iface-id': 'fbe265e8-4ccb-490c-b57d-5c1633844053', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:92:26', 'vm-uuid': 'daa0d61c-ce51-4a65-82e0-106c2654ed92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.565 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:12 np0005549474 NetworkManager[49051]: <info>  [1765102332.5666] manager: (tapfbe265e8-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.567 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.574 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.575 256757 INFO os_vif [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c')#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.576 256757 DEBUG nova.virt.libvirt.vif [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1390661383',display_name='tempest-TestNetworkBasicOps-server-1390661383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1390661383',id=6,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+/vPI4e2NtH6h2oCv9Fj3XWNq8zUo66YWV86MfpANVywGXA0slkI6U0K669sWiaSD+5dsMO7JVa1SJLOuvVvWAjhWYnI3Sk4xqn8SB4wmPdrCzHMVsE1qT7PZGkKUz3w==',key_name='tempest-TestNetworkBasicOps-1190719752',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-rn85oa33',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:11:43Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=daa0d61c-ce51-4a65-82e0-106c2654ed92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.577 256757 DEBUG nova.network.os_vif_util [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.577 256757 DEBUG nova.network.os_vif_util [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.581 256757 DEBUG nova.virt.libvirt.guest [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] attach device xml: <interface type="ethernet">
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <mac address="fa:16:3e:39:92:26"/>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <model type="virtio"/>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <driver name="vhost" rx_queue_size="512"/>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <mtu size="1442"/>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <target dev="tapfbe265e8-4c"/>
Dec  7 05:12:12 np0005549474 nova_compute[256753]: </interface>
Dec  7 05:12:12 np0005549474 nova_compute[256753]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  7 05:12:12 np0005549474 kernel: tapfbe265e8-4c: entered promiscuous mode
Dec  7 05:12:12 np0005549474 NetworkManager[49051]: <info>  [1765102332.5968] manager: (tapfbe265e8-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Dec  7 05:12:12 np0005549474 ovn_controller[154296]: 2025-12-07T10:12:12Z|00056|binding|INFO|Claiming lport fbe265e8-4ccb-490c-b57d-5c1633844053 for this chassis.
Dec  7 05:12:12 np0005549474 ovn_controller[154296]: 2025-12-07T10:12:12Z|00057|binding|INFO|fbe265e8-4ccb-490c-b57d-5c1633844053: Claiming fa:16:3e:39:92:26 10.100.0.25
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.599 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.610 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:92:26 10.100.0.25'], port_security=['fa:16:3e:39:92:26 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'daa0d61c-ce51-4a65-82e0-106c2654ed92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e50e4dbc-db48-44c0-b801-323654e1b24c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a18a0c1d-edcd-4726-8624-d7535cb9aece', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9e687d9-7923-4eb6-b2b9-ae9c6837acef, chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=fbe265e8-4ccb-490c-b57d-5c1633844053) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.612 164143 INFO neutron.agent.ovn.metadata.agent [-] Port fbe265e8-4ccb-490c-b57d-5c1633844053 in datapath e50e4dbc-db48-44c0-b801-323654e1b24c bound to our chassis#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.614 164143 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e50e4dbc-db48-44c0-b801-323654e1b24c#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.628 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[2a46c5a0-83a5-4f3e-b249-435eaf7975c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.628 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape50e4dbc-d1 in ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.630 262215 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape50e4dbc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.630 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[b337b06e-b3f9-4119-8722-de30d1382f49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 systemd-udevd[271001]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.631 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[9c337d1b-da5f-4126-8281-76e645faedc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 NetworkManager[49051]: <info>  [1765102332.6462] device (tapfbe265e8-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 05:12:12 np0005549474 NetworkManager[49051]: <info>  [1765102332.6473] device (tapfbe265e8-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.650 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[7e641bad-8e3b-4c83-8fad-0129b5103158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.678 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[f84ccda3-c181-447e-8556-3a14709941e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.680 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:12 np0005549474 ovn_controller[154296]: 2025-12-07T10:12:12Z|00058|binding|INFO|Setting lport fbe265e8-4ccb-490c-b57d-5c1633844053 ovn-installed in OVS
Dec  7 05:12:12 np0005549474 ovn_controller[154296]: 2025-12-07T10:12:12Z|00059|binding|INFO|Setting lport fbe265e8-4ccb-490c-b57d-5c1633844053 up in Southbound
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.683 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:12 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.705 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[7c92d1c4-2b59-4244-aca5-9bad722580dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.710 256757 DEBUG nova.virt.libvirt.driver [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.711 256757 DEBUG nova.virt.libvirt.driver [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.711 256757 DEBUG nova.virt.libvirt.driver [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No VIF found with MAC fa:16:3e:8c:0c:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.711 256757 DEBUG nova.virt.libvirt.driver [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No VIF found with MAC fa:16:3e:39:92:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  7 05:12:12 np0005549474 systemd-udevd[271004]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.712 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0cfe7b-476f-44cc-92ae-dd81cd02d960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 NetworkManager[49051]: <info>  [1765102332.7126] manager: (tape50e4dbc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Dec  7 05:12:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.743 256757 DEBUG nova.virt.libvirt.guest [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <nova:name>tempest-TestNetworkBasicOps-server-1390661383</nova:name>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <nova:creationTime>2025-12-07 10:12:12</nova:creationTime>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <nova:flavor name="m1.nano">
Dec  7 05:12:12 np0005549474 nova_compute[256753]:    <nova:memory>128</nova:memory>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:    <nova:disk>1</nova:disk>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:    <nova:swap>0</nova:swap>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:    <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:    <nova:vcpus>1</nova:vcpus>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  </nova:flavor>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <nova:owner>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:    <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:    <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  </nova:owner>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  <nova:ports>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:    <nova:port uuid="4109af21-a3da-49b5-8481-432b45bf7ea9">
Dec  7 05:12:12 np0005549474 nova_compute[256753]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:    </nova:port>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:    <nova:port uuid="fbe265e8-4ccb-490c-b57d-5c1633844053">
Dec  7 05:12:12 np0005549474 nova_compute[256753]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:    </nova:port>
Dec  7 05:12:12 np0005549474 nova_compute[256753]:  </nova:ports>
Dec  7 05:12:12 np0005549474 nova_compute[256753]: </nova:instance>
Dec  7 05:12:12 np0005549474 nova_compute[256753]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.745 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[33218ec4-c54c-4d0d-9d09-5a972338b81c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.748 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a131d6-cff2-449e-81e9-f11c8c86cb57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:12:12 np0005549474 NetworkManager[49051]: <info>  [1765102332.7703] device (tape50e4dbc-d0): carrier: link connected
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.775 256757 DEBUG oslo_concurrency.lockutils [None req-e7a24b73-f90f-4190-bc0c-a42c1b95bce6 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "interface-daa0d61c-ce51-4a65-82e0-106c2654ed92-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.776 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[ad83c121-4387-4cea-be12-4f993dffafeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.793 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[78d0743e-e09c-4fd1-8db9-69b84f29f983]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape50e4dbc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:52:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429543, 'reachable_time': 41348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271028, 'error': None, 'target': 'ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.809 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[d3df6438-fff5-48e3-8d5c-85982793cde6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:5259'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429543, 'tstamp': 429543}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271029, 'error': None, 'target': 'ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.830 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[78fc1194-8b8a-4630-bf3e-2ddba6773de4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape50e4dbc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:52:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429543, 'reachable_time': 41348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271030, 'error': None, 'target': 'ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.864 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[d215eab1-0a7d-4714-9fcd-c191e0595eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
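The privsep replies above are raw pyroute2 netlink messages, an RTM_NEWADDR for the link-local address fe80::f816:3eff:feb6:5259 and an RTM_NEWLINK for the veth tape50e4dbc-d1, both read from inside the ovnmeta-e50e4dbc-... namespace; the agent funnels these reads through the privsep daemon because entering the namespace requires root. A minimal sketch of the equivalent direct reads with pyroute2 (namespace name taken from the log; run as root):

    # Sketch: read link and address state from the OVN metadata namespace,
    # as the agent does indirectly via oslo.privsep (requires root).
    from pyroute2 import NetNS

    NS = 'ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c'  # name from the log

    ns = NetNS(NS)
    try:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link['state'])
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), '/', addr['prefixlen'])
    finally:
        ns.close()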
Dec  7 05:12:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:12.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.927 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[5b13b565-f83f-4da9-b065-c88e54297b91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.929 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape50e4dbc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.929 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.930 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape50e4dbc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:12:12 np0005549474 NetworkManager[49051]: <info>  [1765102332.9330] manager: (tape50e4dbc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec  7 05:12:12 np0005549474 kernel: tape50e4dbc-d0: entered promiscuous mode
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.933 256757 DEBUG nova.compute.manager [req-8c6a5b84-0ae5-4b5e-a8e5-491e2d8418ba req-eb50b97f-aa67-445d-978f-823a21c84d99 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-vif-plugged-fbe265e8-4ccb-490c-b57d-5c1633844053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.933 256757 DEBUG oslo_concurrency.lockutils [req-8c6a5b84-0ae5-4b5e-a8e5-491e2d8418ba req-eb50b97f-aa67-445d-978f-823a21c84d99 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.934 256757 DEBUG oslo_concurrency.lockutils [req-8c6a5b84-0ae5-4b5e-a8e5-491e2d8418ba req-eb50b97f-aa67-445d-978f-823a21c84d99 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.935 256757 DEBUG oslo_concurrency.lockutils [req-8c6a5b84-0ae5-4b5e-a8e5-491e2d8418ba req-eb50b97f-aa67-445d-978f-823a21c84d99 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.935 256757 DEBUG nova.compute.manager [req-8c6a5b84-0ae5-4b5e-a8e5-491e2d8418ba req-eb50b97f-aa67-445d-978f-823a21c84d99 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] No waiting events found dispatching network-vif-plugged-fbe265e8-4ccb-490c-b57d-5c1633844053 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.935 256757 WARNING nova.compute.manager [req-8c6a5b84-0ae5-4b5e-a8e5-491e2d8418ba req-eb50b97f-aa67-445d-978f-823a21c84d99 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received unexpected event network-vif-plugged-fbe265e8-4ccb-490c-b57d-5c1633844053 for instance with vm_state active and task_state None.#033[00m
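The five nova_compute lines above are the external-event path end to end: neutron delivers network-vif-plugged for port fbe265e8-..., the per-instance "<uuid>-events" lock is taken while pop_instance_event looks for a waiter, none is registered, and the event is logged as unexpected and discarded, which is typically benign when, as here, the instance is already active with no task in flight. The serialization uses oslo.concurrency's named locks; a minimal sketch of that pattern (the registry and helper below are hypothetical stand-ins, not nova's code):

    # Sketch of the named-lock pop pattern seen above (oslo.concurrency).
    # `_waiting` and `pop_event` are hypothetical, not nova internals.
    from oslo_concurrency import lockutils

    _waiting = {}  # (instance_uuid, event_name) -> waiter

    def pop_event(instance_uuid, event_name):
        # Same shape as the log: acquire "<uuid>-events", pop, release.
        with lockutils.lock('%s-events' % instance_uuid):
            return _waiting.pop((instance_uuid, event_name), None)

    waiter = pop_event('daa0d61c-ce51-4a65-82e0-106c2654ed92',
                       'network-vif-plugged-fbe265e8-4ccb-490c-b57d-5c1633844053')
    if waiter is None:
        print('No waiting events found; treating event as unexpected')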
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.936 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.936 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape50e4dbc-d0, col_values=(('external_ids', {'iface-id': 'dcacf4fb-ee08-4a3a-b858-961dcefef74e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
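Taken together, the three ovsdbapp transactions above move the tap device into place: the DelPortCommand against br-ex is a no-op cleanup ("Transaction caused no change"), the AddPortCommand attaches tape50e4dbc-d0 to br-int, and the DbSetCommand stamps external_ids:iface-id so ovn-controller can bind the interface to its logical port. A sketch of the same three operations through ovsdbapp's Open_vSwitch schema API, assuming a local ovsdb-server socket path:

    # Sketch: replay the logged Del/Add/DbSet commands via ovsdbapp.
    # The socket path is an assumption; adjust for the local ovsdb-server.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tape50e4dbc-d0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tape50e4dbc-d0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape50e4dbc-d0',
            ('external_ids',
             {'iface-id': 'dcacf4fb-ee08-4a3a-b858-961dcefef74e'})))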
Dec  7 05:12:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:12 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:12 np0005549474 ovn_controller[154296]: 2025-12-07T10:12:12Z|00060|binding|INFO|Releasing lport dcacf4fb-ee08-4a3a-b858-961dcefef74e from this chassis (sb_readonly=0)
Dec  7 05:12:12 np0005549474 nova_compute[256753]: 2025-12-07 10:12:12.965 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.966 164143 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e50e4dbc-db48-44c0-b801-323654e1b24c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e50e4dbc-db48-44c0-b801-323654e1b24c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.966 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[81e69c0d-3125-4f0e-a521-d7a631732ae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.967 164143 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: global
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    log         /dev/log local0 debug
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    log-tag     haproxy-metadata-proxy-e50e4dbc-db48-44c0-b801-323654e1b24c
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    user        root
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    group       root
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    maxconn     1024
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    pidfile     /var/lib/neutron/external/pids/e50e4dbc-db48-44c0-b801-323654e1b24c.pid.haproxy
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    daemon
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: defaults
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    log global
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    mode http
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    option httplog
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    option dontlognull
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    option http-server-close
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    option forwardfor
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    retries                 3
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    timeout http-request    30s
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    timeout connect         30s
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    timeout client          32s
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    timeout server          32s
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    timeout http-keep-alive 30s
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: listen listener
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    bind 169.254.169.254:80
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    server metadata /var/lib/neutron/metadata_proxy
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]:    http-request add-header X-OVN-Network-ID e50e4dbc-db48-44c0-b801-323654e1b24c
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  7 05:12:12 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:12.967 164143 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c', 'env', 'PROCESS_TAG=haproxy-e50e4dbc-db48-44c0-b801-323654e1b24c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e50e4dbc-db48-44c0-b801-323654e1b24c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
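The pid-file probe a few lines up returned ENOENT, so no metadata proxy was running for this network; the agent therefore renders the haproxy configuration above and spawns it. The config binds 169.254.169.254:80 inside the namespace, forwards every request to /var/lib/neutron/metadata_proxy (haproxy treats a server address that is a filesystem path as a unix socket), and adds X-OVN-Network-ID so the metadata service can tell which network the request came from. A minimal sketch of the spawn without the rootwrap indirection (needs root; names taken from the log):

    # Sketch: start the metadata haproxy inside its namespace directly,
    # mirroring the rootwrap command logged above. PROCESS_TAG mirrors
    # the env var from the log.
    import subprocess

    NET = 'e50e4dbc-db48-44c0-b801-323654e1b24c'  # network UUID from the log

    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-' + NET,
         'env', 'PROCESS_TAG=haproxy-' + NET,
         'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % NET],
        check=True)

In this deployment the wrapped haproxy is itself containerized: the podman lines that follow create and start neutron-haproxy-ovnmeta-e50e4dbc-... from the neutron-metadata-agent-ovn image, after which haproxy reports its worker forked and "Loading success."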
Dec  7 05:12:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v900: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 15 KiB/s wr, 0 op/s
Dec  7 05:12:13 np0005549474 nova_compute[256753]: 2025-12-07 10:12:13.167 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:13 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:13.168 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:12:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:13 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:13 np0005549474 podman[271065]: 2025-12-07 10:12:13.437052853 +0000 UTC m=+0.080034882 container create be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  7 05:12:13 np0005549474 systemd[1]: Started libpod-conmon-be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629.scope.
Dec  7 05:12:13 np0005549474 podman[271065]: 2025-12-07 10:12:13.40392216 +0000 UTC m=+0.046904189 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  7 05:12:13 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:12:13 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a41bad89a4f1b53554be953966468d557a451730a3ffdf712af00a4138ac17a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  7 05:12:13 np0005549474 podman[271065]: 2025-12-07 10:12:13.536159023 +0000 UTC m=+0.179141022 container init be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:12:13 np0005549474 podman[271065]: 2025-12-07 10:12:13.548106238 +0000 UTC m=+0.191088227 container start be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  7 05:12:13 np0005549474 neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c[271080]: [NOTICE]   (271084) : New worker (271086) forked
Dec  7 05:12:13 np0005549474 neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c[271080]: [NOTICE]   (271084) : Loading success.
Dec  7 05:12:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:13.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:13 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:13.619 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  7 05:12:13 np0005549474 nova_compute[256753]: 2025-12-07 10:12:13.690 256757 DEBUG nova.network.neutron [req-2ad0b760-9b9e-4a9a-b0bd-f877c7c63c7e req-598542a2-db11-426b-94de-ba1cab5388a0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updated VIF entry in instance network info cache for port fbe265e8-4ccb-490c-b57d-5c1633844053. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:12:13 np0005549474 nova_compute[256753]: 2025-12-07 10:12:13.690 256757 DEBUG nova.network.neutron [req-2ad0b760-9b9e-4a9a-b0bd-f877c7c63c7e req-598542a2-db11-426b-94de-ba1cab5388a0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updating instance_info_cache with network_info: [{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:12:13 np0005549474 nova_compute[256753]: 2025-12-07 10:12:13.705 256757 DEBUG oslo_concurrency.lockutils [req-2ad0b760-9b9e-4a9a-b0bd-f877c7c63c7e req-598542a2-db11-426b-94de-ba1cab5388a0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
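The network_info blob cached above describes both ports on instance daa0d61c-...: tap4109af21-a3 with fixed IP 10.100.0.13 plus floating IP 192.168.122.219, and the freshly plugged tapfbe265e8-4c with 10.100.0.25, still "active": false at this point (it flips to true in the refresh near the end of this excerpt). A small sketch that walks a blob of this shape and prints one line per port; the skeleton below keeps only the fields it uses:

    # Sketch: summarize a nova network_info cache entry like the one above.
    network_info = [
        {"id": "4109af21-a3da-49b5-8481-432b45bf7ea9",
         "address": "fa:16:3e:8c:0c:76", "devname": "tap4109af21-a3",
         "active": True,
         "network": {"subnets": [{"ips": [
             {"address": "10.100.0.13",
              "floating_ips": [{"address": "192.168.122.219"}]}]}]}},
        {"id": "fbe265e8-4ccb-490c-b57d-5c1633844053",
         "address": "fa:16:3e:39:92:26", "devname": "tapfbe265e8-4c",
         "active": False,
         "network": {"subnets": [{"ips": [
             {"address": "10.100.0.25", "floating_ips": []}]}]}},
    ]

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["devname"], vif["address"], ips, "active=%s" % vif["active"])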
Dec  7 05:12:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:14 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:14.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:14 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:15 np0005549474 nova_compute[256753]: 2025-12-07 10:12:15.014 256757 DEBUG nova.compute.manager [req-3f81fb5c-8cec-498d-a35b-9fc91879efbc req-3613147d-3351-41f3-b897-f788c63ea029 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-vif-plugged-fbe265e8-4ccb-490c-b57d-5c1633844053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:12:15 np0005549474 nova_compute[256753]: 2025-12-07 10:12:15.015 256757 DEBUG oslo_concurrency.lockutils [req-3f81fb5c-8cec-498d-a35b-9fc91879efbc req-3613147d-3351-41f3-b897-f788c63ea029 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:12:15 np0005549474 nova_compute[256753]: 2025-12-07 10:12:15.016 256757 DEBUG oslo_concurrency.lockutils [req-3f81fb5c-8cec-498d-a35b-9fc91879efbc req-3613147d-3351-41f3-b897-f788c63ea029 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:12:15 np0005549474 nova_compute[256753]: 2025-12-07 10:12:15.016 256757 DEBUG oslo_concurrency.lockutils [req-3f81fb5c-8cec-498d-a35b-9fc91879efbc req-3613147d-3351-41f3-b897-f788c63ea029 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:12:15 np0005549474 nova_compute[256753]: 2025-12-07 10:12:15.016 256757 DEBUG nova.compute.manager [req-3f81fb5c-8cec-498d-a35b-9fc91879efbc req-3613147d-3351-41f3-b897-f788c63ea029 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] No waiting events found dispatching network-vif-plugged-fbe265e8-4ccb-490c-b57d-5c1633844053 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:12:15 np0005549474 nova_compute[256753]: 2025-12-07 10:12:15.017 256757 WARNING nova.compute.manager [req-3f81fb5c-8cec-498d-a35b-9fc91879efbc req-3613147d-3351-41f3-b897-f788c63ea029 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received unexpected event network-vif-plugged-fbe265e8-4ccb-490c-b57d-5c1633844053 for instance with vm_state active and task_state None.#033[00m
Dec  7 05:12:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v901: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec  7 05:12:15 np0005549474 nova_compute[256753]: 2025-12-07 10:12:15.312 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:15 np0005549474 ovn_controller[154296]: 2025-12-07T10:12:15Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:39:92:26 10.100.0.25
Dec  7 05:12:15 np0005549474 ovn_controller[154296]: 2025-12-07T10:12:15Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:92:26 10.100.0.25
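DHCP for the new port is answered by ovn-controller itself, in its pinctrl thread, rather than by a dnsmasq process: the DHCPOFFER/DHCPACK above hand 10.100.0.25 to fa:16:3e:39:92:26, matching the port's fixed IP in the network_info cache. The lease data comes from the northbound DHCP_Options table; a sketch of inspecting it, assuming ovn-nbctl on this host can reach the northbound database:

    # Sketch: list the OVN-native DHCP options behind the offer/ack above.
    import subprocess

    uuids = subprocess.run(['ovn-nbctl', 'dhcp-options-list'],
                           capture_output=True, text=True, check=True).stdout
    for uuid in uuids.split():
        opts = subprocess.run(['ovn-nbctl', 'dhcp-options-get-options', uuid],
                              capture_output=True, text=True,
                              check=True).stdout
        print(uuid, opts.strip())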
Dec  7 05:12:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:15.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:16 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:16 np0005549474 nova_compute[256753]: 2025-12-07 10:12:16.772 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:12:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:16.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:16 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v902: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Dec  7 05:12:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:12:17.152Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:12:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:17 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:17 np0005549474 nova_compute[256753]: 2025-12-07 10:12:17.565 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:17.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:12:17 np0005549474 nova_compute[256753]: 2025-12-07 10:12:17.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:12:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:18 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:18.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:18 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v903: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 4.0 KiB/s wr, 1 op/s
Dec  7 05:12:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:19 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:19.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:19 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:19.621 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
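This DbSetCommand closes the loop opened by the SB_Global update earlier in the excerpt (nb_cfg moved from 7 to 8) and the agent's "Delaying updating chassis table for 6 seconds": after the delay, the agent acknowledges the new sequence number by writing neutron:ovn-metadata-sb-cfg onto its Chassis_Private row, which neutron-server reads for agent liveness. A sketch of the same acknowledgement against the southbound database (socket path assumed; record UUID and value from the log):

    # Sketch: ack nb_cfg on the agent's Chassis_Private record via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/ovn/ovnsb_db.sock', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))

    sb.db_set('Chassis_Private', '8da81261-a5d6-4df8-aa54-d9c0c8f72a67',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'})
              ).execute(check_error=True)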
Dec  7 05:12:19 np0005549474 nova_compute[256753]: 2025-12-07 10:12:19.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:12:19 np0005549474 nova_compute[256753]: 2025-12-07 10:12:19.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:12:19 np0005549474 nova_compute[256753]: 2025-12-07 10:12:19.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  7 05:12:19 np0005549474 nova_compute[256753]: 2025-12-07 10:12:19.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:12:19 np0005549474 nova_compute[256753]: 2025-12-07 10:12:19.777 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:12:19 np0005549474 nova_compute[256753]: 2025-12-07 10:12:19.778 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:12:19 np0005549474 nova_compute[256753]: 2025-12-07 10:12:19.778 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:12:19 np0005549474 nova_compute[256753]: 2025-12-07 10:12:19.778 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  7 05:12:19 np0005549474 nova_compute[256753]: 2025-12-07 10:12:19.779 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:12:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:19] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Dec  7 05:12:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:19] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Dec  7 05:12:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:12:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438126098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.262 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
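As part of the resource audit started above ("Auditing locally available compute resources"), the tracker shells out to ceph df, which the libvirt driver does when instance disks are RBD-backed; the call returns in about half a second here. A sketch of the same probe and the fields a capacity check would read (the pool name 'vms' is an assumption, not from the log):

    # Sketch: run nova's logged capacity probe and parse the result.
    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(raw)

    print('cluster avail bytes:', df['stats']['total_avail_bytes'])
    for pool in df['pools']:
        if pool['name'] == 'vms':  # assumed pool name
            print('vms used bytes:', pool['stats']['bytes_used'])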
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.314 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.339 256757 DEBUG nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.340 256757 DEBUG nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.629 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.631 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4348MB free_disk=59.94270324707031GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.631 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.631 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:12:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:20 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.780 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Instance daa0d61c-ce51-4a65-82e0-106c2654ed92 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.780 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.781 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.837 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing inventories for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  7 05:12:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:20.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.930 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updating ProviderTree inventory for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.931 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updating inventory in ProviderTree for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
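The inventory pushed to placement above is what becomes schedulable capacity: per resource class, capacity = (total - reserved) * allocation_ratio, so these numbers yield 32 VCPU (8 * 4.0), 7168 MB of RAM ((7680 - 512) * 1.0) and about 52 GB of disk ((59 - 1) * 0.9), which is why "Total usable vcpus: 8, total allocated vcpus: 1" a few lines up leaves ample headroom. A one-line check of that arithmetic:

    # Check: effective capacity per resource class from the logged inventory.
    inv = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2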
Dec  7 05:12:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:20 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.952 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing aggregate associations for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  7 05:12:20 np0005549474 nova_compute[256753]: 2025-12-07 10:12:20.978 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing trait associations for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  7 05:12:21 np0005549474 nova_compute[256753]: 2025-12-07 10:12:21.012 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:12:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v904: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 8.7 KiB/s wr, 1 op/s
Dec  7 05:12:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:21 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:12:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1725739159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:12:21 np0005549474 nova_compute[256753]: 2025-12-07 10:12:21.464 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:12:21 np0005549474 nova_compute[256753]: 2025-12-07 10:12:21.472 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  7 05:12:21 np0005549474 nova_compute[256753]: 2025-12-07 10:12:21.499 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  7 05:12:21 np0005549474 nova_compute[256753]: 2025-12-07 10:12:21.527 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  7 05:12:21 np0005549474 nova_compute[256753]: 2025-12-07 10:12:21.527 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:12:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:21.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:22 np0005549474 nova_compute[256753]: 2025-12-07 10:12:22.568 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:22 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:12:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:22.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:22 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v905: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 6.3 KiB/s wr, 1 op/s
Dec  7 05:12:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:23 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:23 np0005549474 nova_compute[256753]: 2025-12-07 10:12:23.527 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:12:23 np0005549474 nova_compute[256753]: 2025-12-07 10:12:23.527 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:12:23 np0005549474 nova_compute[256753]: 2025-12-07 10:12:23.528 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  7 05:12:23 np0005549474 nova_compute[256753]: 2025-12-07 10:12:23.528 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  7 05:12:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:23.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:23 np0005549474 nova_compute[256753]: 2025-12-07 10:12:23.721 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:12:23 np0005549474 nova_compute[256753]: 2025-12-07 10:12:23.721 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquired lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:12:23 np0005549474 nova_compute[256753]: 2025-12-07 10:12:23.722 256757 DEBUG nova.network.neutron [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  7 05:12:23 np0005549474 nova_compute[256753]: 2025-12-07 10:12:23.722 256757 DEBUG nova.objects.instance [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lazy-loading 'info_cache' on Instance uuid daa0d61c-ce51-4a65-82e0-106c2654ed92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:12:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:24 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:24.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:24 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v906: 337 pgs: 337 active+clean; 163 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Dec  7 05:12:25 np0005549474 nova_compute[256753]: 2025-12-07 10:12:25.316 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:25 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:25.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:26 np0005549474 nova_compute[256753]: 2025-12-07 10:12:26.423 256757 DEBUG nova.network.neutron [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updating instance_info_cache with network_info: [{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:12:26 np0005549474 nova_compute[256753]: 2025-12-07 10:12:26.472 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Releasing lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:12:26 np0005549474 nova_compute[256753]: 2025-12-07 10:12:26.473 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  7 05:12:26 np0005549474 nova_compute[256753]: 2025-12-07 10:12:26.474 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:12:26 np0005549474 nova_compute[256753]: 2025-12-07 10:12:26.474 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
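The nova_compute lines from 10:12:23.528 through 10:12:26.473 trace one pass of ComputeManager._heal_instance_info_cache: acquire the per-instance lock "refresh_cache-<uuid>", lazy-load the stale info_cache, force a refresh from Neutron, store the new network_info, release the lock, then continue with the other periodic tasks. A minimal sketch of that acquire/refresh/release pattern, using stdlib threading in place of oslo_concurrency.lockutils (names here are illustrative, not Nova's):

    import threading

    _cache_locks = {}        # one lock per instance UUID ("refresh_cache-<uuid>")
    _info_cache = {}         # instance UUID -> network_info list
    _registry_lock = threading.Lock()

    def _lock_for(instance_uuid):
        with _registry_lock:
            return _cache_locks.setdefault(instance_uuid, threading.Lock())

    def heal_instance_info_cache(instance_uuid, fetch_nw_info):
        """Forcefully refresh one instance's network info cache.

        fetch_nw_info is a callable standing in for the Neutron query
        the real _get_instance_nw_info performs (hypothetical here).
        """
        lock = _lock_for(instance_uuid)
        with lock:                                 # "Acquired lock refresh_cache-..."
            nw_info = fetch_nw_info(instance_uuid)  # force refresh from Neutron
            _info_cache[instance_uuid] = nw_info    # "Updating instance_info_cache..."
        # lock released here, matching "Releasing lock refresh_cache-..."

    heal_instance_info_cache("daa0d61c-ce51-4a65-82e0-106c2654ed92",
                             lambda uuid: [{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9"}])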
Dec  7 05:12:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:26 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:26.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:26 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v907: 337 pgs: 337 active+clean; 163 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Dec  7 05:12:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:12:27.153Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:12:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:12:27.153Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:12:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:12:27.153Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
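The alertmanager warnings above show both ceph-dashboard webhook receivers (compute-1 and compute-2, port 8443, path /api/prometheus_receiver) timing out, so each notification is retried and finally canceled at the context deadline. A throwaway stand-in receiver can confirm whether the port and path are reachable at all; note the real dashboard endpoint serves TLS, while this sketch is plain HTTP:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical stand-in for the dashboard's /api/prometheus_receiver,
    # useful only to verify that alertmanager can reach port 8443.
    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path == "/api/prometheus_receiver":
                body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
                print("got %d bytes of alert payload" % len(body))
                self.send_response(200)
                self.end_headers()
            else:
                self.send_error(404)

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()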
Dec  7 05:12:27 np0005549474 podman[271180]: 2025-12-07 10:12:27.334527435 +0000 UTC m=+0.127823654 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Dec  7 05:12:27 np0005549474 podman[271181]: 2025-12-07 10:12:27.335276875 +0000 UTC m=+0.127490435 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
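Each podman health_status event above is one line whose parenthesized payload mixes flat key=value pairs with a nested config_data dict, so naive splitting on commas fails. A small sketch that pulls out just the flat fields of interest (field names taken from the lines above):

    import re

    def health_fields(line):
        """Extract a few flat key=value fields from a podman health_status event."""
        fields = {}
        for key in ("name", "health_status", "health_failing_streak"):
            m = re.search(r'\b%s=([^,)]+)' % key, line)
            if m:
                fields[key] = m.group(1)
        return fields

    # For the multipathd line above this yields:
    # {'name': 'multipathd', 'health_status': 'healthy', 'health_failing_streak': '0'}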
Dec  7 05:12:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:27 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:12:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
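The mon audit lines above show mgr.compute-0.dotugk polling "osd blocklist ls" (it recurs roughly every 15 s in this capture). The same query can be issued from the command line; a sketch that shells out to the ceph CLI, assuming an admin keyring on the host and that the JSON output is a list of entries:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    entries = json.loads(out) if out.strip() else []
    print("%d blocklist entries" % len(entries))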
Dec  7 05:12:27 np0005549474 nova_compute[256753]: 2025-12-07 10:12:27.570 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:27.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
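The recurring _set_new_cache_sizes line is easier to read in MiB: the three allocations are MiB-aligned and sum to just under the cache_size budget. A quick check with the numbers from the line above:

    MIB = 1024 * 1024
    cache_size, inc, full, kv = 1020054731, 343932928, 348127232, 318767104
    print(cache_size / MIB)           # ~972.8 MiB total budget
    print(inc / MIB, full / MIB, kv / MIB)   # 328.0 332.0 304.0
    print((inc + full + kv) / MIB)    # 964.0 MiB allocated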
Dec  7 05:12:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:28 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:28.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:28 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v908: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:12:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:29 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:29.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:29 np0005549474 nova_compute[256753]: 2025-12-07 10:12:29.696 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:12:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:29] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Dec  7 05:12:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:29] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Dec  7 05:12:30 np0005549474 nova_compute[256753]: 2025-12-07 10:12:30.318 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:30 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:30.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:30 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v909: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:12:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:31 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:31.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:32 np0005549474 nova_compute[256753]: 2025-12-07 10:12:32.572 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:32 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:12:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:32.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:32 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880049a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v910: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:12:33 np0005549474 podman[271232]: 2025-12-07 10:12:33.279592781 +0000 UTC m=+0.088669858 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  7 05:12:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:33 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:33.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:34 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:34.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:34 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v911: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  7 05:12:35 np0005549474 nova_compute[256753]: 2025-12-07 10:12:35.372 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:35 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:35.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:36 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:36.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:36 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v912: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 60 KiB/s wr, 75 op/s
Dec  7 05:12:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:12:37.154Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:12:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:37 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:37 np0005549474 nova_compute[256753]: 2025-12-07 10:12:37.604 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:37.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:12:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:38.623 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:12:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:38.623 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:12:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:38.624 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
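The three ovn_metadata_agent lines above are the standard oslo_concurrency trace for a decorated critical section: acquire, run, release, with wait/held timings. In neutron this guards ProcessMonitor._check_child_processes; the idiom looks like the following sketch (the decorator usage is the real oslo API, the body is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # placeholder body; the real method re-spawns any monitored
        # child process (e.g. a haproxy instance) that has died
        pass

    check_child_processes()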
Dec  7 05:12:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:38 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:38.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:38 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v913: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 60 KiB/s wr, 75 op/s
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.371660) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359371737, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2121, "num_deletes": 251, "total_data_size": 4269350, "memory_usage": 4336472, "flush_reason": "Manual Compaction"}
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359408124, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4137595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24879, "largest_seqno": 26999, "table_properties": {"data_size": 4128037, "index_size": 5988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19825, "raw_average_key_size": 20, "raw_value_size": 4109046, "raw_average_value_size": 4218, "num_data_blocks": 262, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102151, "oldest_key_time": 1765102151, "file_creation_time": 1765102359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 36634 microseconds, and 15559 cpu microseconds.
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.408302) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4137595 bytes OK
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.408348) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.410533) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.410546) EVENT_LOG_v1 {"time_micros": 1765102359410542, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.410561) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4260648, prev total WAL file size 4260648, number of live WAL files 2.
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.411846) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(4040KB)], [56(12MB)]
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359411910, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 17131399, "oldest_snapshot_seqno": -1}
Dec  7 05:12:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:39 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5958 keys, 15015189 bytes, temperature: kUnknown
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359565021, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 15015189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14974425, "index_size": 24782, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14917, "raw_key_size": 151430, "raw_average_key_size": 25, "raw_value_size": 14866078, "raw_average_value_size": 2495, "num_data_blocks": 1012, "num_entries": 5958, "num_filter_entries": 5958, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765102359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.565433) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 15015189 bytes
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.566884) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.8 rd, 98.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.4 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 6478, records dropped: 520 output_compression: NoCompression
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.566957) EVENT_LOG_v1 {"time_micros": 1765102359566900, "job": 30, "event": "compaction_finished", "compaction_time_micros": 153254, "compaction_time_cpu_micros": 35831, "output_level": 6, "num_output_files": 1, "total_output_size": 15015189, "num_input_records": 6478, "num_output_records": 5958, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359568940, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359573346, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.411748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.573436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.573442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.573445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.573447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:12:39 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:12:39.573450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
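The JOB 30 event lines above contain enough data to reproduce rocksdb's own summary figures (read-write-amplify 7.8, write-amplify 3.6, 111.8 MB/s rd, 98.0 MB/s wr):

    # Numbers copied from the JOB 30 event lines above.
    l0_in = 4137595            # table #58, flushed from the memtable
    l6_in = 17131399 - l0_in   # table #56, the existing level-6 file
    out   = 15015189           # table #59, written by the compaction
    secs  = 153254 / 1e6       # compaction_time_micros

    print("write-amplify      %.1f" % (out / l0_in))                    # 3.6
    print("read-write-amplify %.1f" % ((l0_in + l6_in + out) / l0_in))  # 7.8
    print("rd MB/s            %.1f" % ((l0_in + l6_in) / secs / 1e6))   # 111.8
    print("wr MB/s            %.1f" % (out / secs / 1e6))               # 98.0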
Dec  7 05:12:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:39.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:39] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Dec  7 05:12:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:39] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Dec  7 05:12:40 np0005549474 nova_compute[256753]: 2025-12-07 10:12:40.375 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:40 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:40.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:40 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v914: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec  7 05:12:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:41 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:41.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:12:42
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['.mgr', 'images', '.nfs', 'backups', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes']
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:12:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:12:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:12:42 np0005549474 nova_compute[256753]: 2025-12-07 10:12:42.611 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:12:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:42 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011092763072214624 of space, bias 1.0, pg target 0.33278289216643875 quantized to 32 (current 32)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
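The per-pool pg_autoscaler lines follow one formula: pg target = usage_ratio x bias x a cluster-wide PG budget, which for these numbers is exactly 300 (consistent with mon_target_pg_per_osd = 100 across 3 OSDs, an assumption about this cluster); the result is then quantized to a power of two, and since every target here is far below the current pg_num, each pool stays at its current value. Reproducing three of the lines above:

    BUDGET = 300.0   # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * BUDGET

    print(pg_target(7.185749983720779e-06, 1.0))  # .mgr               -> 0.0021557249951...
    print(pg_target(0.0011092763072214624, 1.0))  # vms                -> 0.3327828921...
    print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.0006104707950...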
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:12:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:12:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:42.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:42 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v915: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec  7 05:12:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:43 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:43.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:44 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:44.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:44 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v916: 337 pgs: 337 active+clean; 188 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Dec  7 05:12:45 np0005549474 nova_compute[256753]: 2025-12-07 10:12:45.408 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:12:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:45 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:45.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:46.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v917: 337 pgs: 337 active+clean; 188 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 2.0 MiB/s wr, 39 op/s
Dec  7 05:12:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:12:47.155Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:12:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:12:47.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:12:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:47 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:47 np0005549474 nova_compute[256753]: 2025-12-07 10:12:47.612 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:47.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:12:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:48 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:48.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:48 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v918: 337 pgs: 337 active+clean; 196 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 203 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec  7 05:12:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:49 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:49.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:49] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:12:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:49] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
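A sketch of the scrape Prometheus performs above against the ceph-mgr prometheus module; the host comes from this capture, while port 9283 is the module's usual default and is an assumption here:

    import urllib.request

    url = "http://192.168.122.100:9283/metrics"  # port is an assumption
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode()
        print(resp.status, len(body), "bytes")  # the log shows 200 / 48395 bytes
        print([l for l in body.splitlines() if l.startswith("ceph_health")][:3])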
Dec  7 05:12:50 np0005549474 nova_compute[256753]: 2025-12-07 10:12:50.411 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:50 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:50.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:50 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800abe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v919: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec  7 05:12:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:51 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800abe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:51.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:52 np0005549474 nova_compute[256753]: 2025-12-07 10:12:52.615 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:52 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:52.624 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:12:52 np0005549474 nova_compute[256753]: 2025-12-07 10:12:52.624 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:52 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:52.625 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
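The agent above sees SB_Global nb_cfg move from 8 to 9 and deliberately waits 5 seconds before acknowledging it in Chassis_Private. A generic debounce sketch of that pattern, not neutron's actual implementation:

    import threading

    class DebouncedWriter:
        def __init__(self, delay, write):
            self._delay = delay    # seconds of quiet time (the log uses 5)
            self._write = write    # callback performing the DB update
            self._timer = None
            self._lock = threading.Lock()

        def notify(self, nb_cfg):
            with self._lock:
                if self._timer:
                    self._timer.cancel()  # a newer event supersedes the pending one
                self._timer = threading.Timer(self._delay, self._write, args=(nb_cfg,))
                self._timer.start()

    w = DebouncedWriter(5, lambda cfg: print("ack neutron:ovn-metadata-sb-cfg =", cfg))
    w.notify(8)
    w.notify(9)  # only 9 is written, five seconds after the last event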
Dec  7 05:12:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:12:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:52 np0005549474 nova_compute[256753]: 2025-12-07 10:12:52.807 256757 DEBUG nova.compute.manager [req-8ce7d6c4-5d51-46b0-b06b-5a282e07644c req-51655308-a2d7-4a4f-ba34-6d4986320f67 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-changed-fbe265e8-4ccb-490c-b57d-5c1633844053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:12:52 np0005549474 nova_compute[256753]: 2025-12-07 10:12:52.807 256757 DEBUG nova.compute.manager [req-8ce7d6c4-5d51-46b0-b06b-5a282e07644c req-51655308-a2d7-4a4f-ba34-6d4986320f67 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Refreshing instance network info cache due to event network-changed-fbe265e8-4ccb-490c-b57d-5c1633844053. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:12:52 np0005549474 nova_compute[256753]: 2025-12-07 10:12:52.807 256757 DEBUG oslo_concurrency.lockutils [req-8ce7d6c4-5d51-46b0-b06b-5a282e07644c req-51655308-a2d7-4a4f-ba34-6d4986320f67 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:12:52 np0005549474 nova_compute[256753]: 2025-12-07 10:12:52.807 256757 DEBUG oslo_concurrency.lockutils [req-8ce7d6c4-5d51-46b0-b06b-5a282e07644c req-51655308-a2d7-4a4f-ba34-6d4986320f67 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:12:52 np0005549474 nova_compute[256753]: 2025-12-07 10:12:52.808 256757 DEBUG nova.network.neutron [req-8ce7d6c4-5d51-46b0-b06b-5a282e07644c req-51655308-a2d7-4a4f-ba34-6d4986320f67 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Refreshing network info cache for port fbe265e8-4ccb-490c-b57d-5c1633844053 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:12:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:52.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v920: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec  7 05:12:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:53 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:53 np0005549474 nova_compute[256753]: 2025-12-07 10:12:53.594 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:12:53 np0005549474 nova_compute[256753]: 2025-12-07 10:12:53.616 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Triggering sync for uuid daa0d61c-ce51-4a65-82e0-106c2654ed92 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  7 05:12:53 np0005549474 nova_compute[256753]: 2025-12-07 10:12:53.617 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:12:53 np0005549474 nova_compute[256753]: 2025-12-07 10:12:53.618 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:12:53 np0005549474 nova_compute[256753]: 2025-12-07 10:12:53.653 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
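The acquire/release pair above is nova's per-instance critical section for power-state sync. A sketch of the same pattern with oslo.concurrency (the body is a placeholder; requires oslo.concurrency installed):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "daa0d61c-ce51-4a65-82e0-106c2654ed92"

    @lockutils.synchronized(INSTANCE_UUID)
    def query_driver_power_state_and_sync():
        # only one power-state sync may touch this instance at a time
        pass

    query_driver_power_state_and_sync()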
Dec  7 05:12:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:53.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:53 np0005549474 nova_compute[256753]: 2025-12-07 10:12:53.730 256757 DEBUG nova.network.neutron [req-8ce7d6c4-5d51-46b0-b06b-5a282e07644c req-51655308-a2d7-4a4f-ba34-6d4986320f67 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updated VIF entry in instance network info cache for port fbe265e8-4ccb-490c-b57d-5c1633844053. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:12:53 np0005549474 nova_compute[256753]: 2025-12-07 10:12:53.730 256757 DEBUG nova.network.neutron [req-8ce7d6c4-5d51-46b0-b06b-5a282e07644c req-51655308-a2d7-4a4f-ba34-6d4986320f67 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updating instance_info_cache with network_info: [{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:12:53 np0005549474 nova_compute[256753]: 2025-12-07 10:12:53.747 256757 DEBUG oslo_concurrency.lockutils [req-8ce7d6c4-5d51-46b0-b06b-5a282e07644c req-51655308-a2d7-4a4f-ba34-6d4986320f67 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
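A sketch that flattens the cached network_info above into per-port rows; the structure mirrors the JSON in the "Updating instance_info_cache" line, and summarize is our own helper name:

    def summarize(network_info):
        rows = []
        for vif in network_info:
            fixed, floating = [], []
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    fixed.append(ip["address"])
                    floating += [f["address"] for f in ip.get("floating_ips", [])]
            rows.append((vif["id"], vif["address"], fixed, floating))
        return rows
    # [('4109af21-...', 'fa:16:3e:8c:0c:76', ['10.100.0.13'], ['192.168.122.219']),
    #  ('fbe265e8-...', 'fa:16:3e:39:92:26', ['10.100.0.25'], [])]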
Dec  7 05:12:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:54 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:54.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:54 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800abe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v921: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec  7 05:12:55 np0005549474 nova_compute[256753]: 2025-12-07 10:12:55.413 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:55 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800abe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:55.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:56 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:56.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:56 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v922: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 110 KiB/s wr, 20 op/s
Dec  7 05:12:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:12:57.156Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:12:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:12:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
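The audit line above shows the mgr dispatching "osd blocklist ls". The same query from the CLI, assuming a host with the ceph client and keyring (the JSON output shape is assumed to be a list):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    entries = json.loads(out.stdout)
    print(len(entries), "blocklist entries")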
Dec  7 05:12:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:57 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15680040f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:57 np0005549474 nova_compute[256753]: 2025-12-07 10:12:57.617 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:12:57 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:12:57.626 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:12:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:12:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:57.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:12:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:12:58 np0005549474 podman[271303]: 2025-12-07 10:12:58.266164578 +0000 UTC m=+0.071791197 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  7 05:12:58 np0005549474 podman[271304]: 2025-12-07 10:12:58.301265885 +0000 UTC m=+0.107769758 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125)
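A sketch for reading the same health state podman records above, for one container; on older podman the template field may be .State.Healthcheck.Status instead:

    import subprocess

    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", "multipathd"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("multipathd health:", status)  # the log records health_status=healthy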
Dec  7 05:12:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:58 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800ac00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:58.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:58 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v923: 337 pgs: 337 active+clean; 134 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 112 KiB/s wr, 40 op/s
Dec  7 05:12:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:12:59 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:12:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:12:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:12:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:59.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:12:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:59] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:12:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:12:59] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:13:00 np0005549474 nova_compute[256753]: 2025-12-07 10:13:00.416 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:00 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:00.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:00 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800ac20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v924: 337 pgs: 337 active+clean; 121 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 31 KiB/s wr, 30 op/s
Dec  7 05:13:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:01 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:01.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.619 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.682 256757 DEBUG oslo_concurrency.lockutils [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "interface-daa0d61c-ce51-4a65-82e0-106c2654ed92-fbe265e8-4ccb-490c-b57d-5c1633844053" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.683 256757 DEBUG oslo_concurrency.lockutils [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "interface-daa0d61c-ce51-4a65-82e0-106c2654ed92-fbe265e8-4ccb-490c-b57d-5c1633844053" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.700 256757 DEBUG nova.objects.instance [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'flavor' on Instance uuid daa0d61c-ce51-4a65-82e0-106c2654ed92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.724 256757 DEBUG nova.virt.libvirt.vif [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1390661383',display_name='tempest-TestNetworkBasicOps-server-1390661383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1390661383',id=6,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+/vPI4e2NtH6h2oCv9Fj3XWNq8zUo66YWV86MfpANVywGXA0slkI6U0K669sWiaSD+5dsMO7JVa1SJLOuvVvWAjhWYnI3Sk4xqn8SB4wmPdrCzHMVsE1qT7PZGkKUz3w==',key_name='tempest-TestNetworkBasicOps-1190719752',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-rn85oa33',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:11:43Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=daa0d61c-ce51-4a65-82e0-106c2654ed92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.724 256757 DEBUG nova.network.os_vif_util [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.725 256757 DEBUG nova.network.os_vif_util [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.730 256757 DEBUG nova.virt.libvirt.guest [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:92:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfbe265e8-4c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.734 256757 DEBUG nova.virt.libvirt.guest [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:92:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfbe265e8-4c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.737 256757 DEBUG nova.virt.libvirt.driver [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Attempting to detach device tapfbe265e8-4c from instance daa0d61c-ce51-4a65-82e0-106c2654ed92 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.738 256757 DEBUG nova.virt.libvirt.guest [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] detach device xml: <interface type="ethernet">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <mac address="fa:16:3e:39:92:26"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <model type="virtio"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <driver name="vhost" rx_queue_size="512"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <mtu size="1442"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <target dev="tapfbe265e8-4c"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: </interface>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
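A sketch of the detach nova performs above via libvirt-python, removing the tap interface from both the live and the persistent domain definition; the device XML is the one nova logged:

    import libvirt

    DEVICE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:39:92:26"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapfbe265e8-4c"/>
    </interface>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("daa0d61c-ce51-4a65-82e0-106c2654ed92")
    flags = libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG
    dom.detachDeviceFlags(DEVICE_XML, flags)
    conn.close()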
Dec  7 05:13:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.747 256757 DEBUG nova.virt.libvirt.guest [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:92:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfbe265e8-4c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.752 256757 DEBUG nova.virt.libvirt.guest [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:92:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfbe265e8-4c"/></interface>not found in domain: <domain type='kvm' id='3'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <name>instance-00000006</name>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <uuid>daa0d61c-ce51-4a65-82e0-106c2654ed92</uuid>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <metadata>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:name>tempest-TestNetworkBasicOps-server-1390661383</nova:name>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:creationTime>2025-12-07 10:12:12</nova:creationTime>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:flavor name="m1.nano">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:memory>128</nova:memory>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:disk>1</nova:disk>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:swap>0</nova:swap>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:vcpus>1</nova:vcpus>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </nova:flavor>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:owner>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </nova:owner>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:ports>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:port uuid="4109af21-a3da-49b5-8481-432b45bf7ea9">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </nova:port>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:port uuid="fbe265e8-4ccb-490c-b57d-5c1633844053">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </nova:port>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </nova:ports>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: </nova:instance>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </metadata>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <memory unit='KiB'>131072</memory>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <vcpu placement='static'>1</vcpu>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <resource>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <partition>/machine</partition>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </resource>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <sysinfo type='smbios'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <system>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='manufacturer'>RDO</entry>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='product'>OpenStack Compute</entry>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='serial'>daa0d61c-ce51-4a65-82e0-106c2654ed92</entry>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='uuid'>daa0d61c-ce51-4a65-82e0-106c2654ed92</entry>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='family'>Virtual Machine</entry>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </system>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </sysinfo>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <os>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <boot dev='hd'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <smbios mode='sysinfo'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <features>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <acpi/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <apic/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <vmcoreinfo state='on'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </features>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <cpu mode='custom' match='exact' check='full'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <vendor>AMD</vendor>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='x2apic'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='tsc-deadline'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='hypervisor'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='tsc_adjust'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='spec-ctrl'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='stibp'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='ssbd'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='cmp_legacy'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='overflow-recov'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='succor'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='ibrs'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='amd-ssbd'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='virt-ssbd'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='lbrv'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='tsc-scale'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='vmcb-clean'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='flushbyasid'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='pause-filter'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='pfthreshold'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='xsaves'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='svm'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='topoext'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='npt'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='nrip-save'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </cpu>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <clock offset='utc'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <timer name='pit' tickpolicy='delay'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <timer name='hpet' present='no'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </clock>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <on_poweroff>destroy</on_poweroff>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <on_reboot>restart</on_reboot>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <on_crash>destroy</on_crash>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <devices>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <disk type='network' device='disk'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <driver name='qemu' type='raw' cache='none'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <auth username='openstack'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <secret type='ceph' uuid='75f4c9fd-539a-5e17-b55a-0a12a4e2736c'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <source protocol='rbd' name='vms/daa0d61c-ce51-4a65-82e0-106c2654ed92_disk' index='2'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.100' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.102' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.101' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target dev='vda' bus='virtio'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='virtio-disk0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <disk type='network' device='cdrom'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <driver name='qemu' type='raw' cache='none'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <auth username='openstack'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <secret type='ceph' uuid='75f4c9fd-539a-5e17-b55a-0a12a4e2736c'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <source protocol='rbd' name='vms/daa0d61c-ce51-4a65-82e0-106c2654ed92_disk.config' index='1'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.100' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.102' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.101' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target dev='sda' bus='sata'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <readonly/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='sata0-0-0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='0' model='pcie-root'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pcie.0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='1' port='0x10'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='2' port='0x11'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='3' port='0x12'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.3'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='4' port='0x13'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.4'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='5' port='0x14'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.5'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='6' port='0x15'/>
Dec  7 05:13:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:02 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15680040f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.6'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='7' port='0x16'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.7'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='8' port='0x17'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.8'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='9' port='0x18'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.9'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='10' port='0x19'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.10'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='11' port='0x1a'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.11'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='12' port='0x1b'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.12'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='13' port='0x1c'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.13'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='14' port='0x1d'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.14'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='15' port='0x1e'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.15'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='16' port='0x1f'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.16'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='17' port='0x20'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.17'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='18' port='0x21'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.18'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='19' port='0x22'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.19'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='20' port='0x23'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.20'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='21' port='0x24'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.21'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='22' port='0x25'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.22'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='23' port='0x26'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.23'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='24' port='0x27'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.24'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='25' port='0x28'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.25'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-pci-bridge'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.26'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='usb'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='sata' index='0'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='ide'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <interface type='ethernet'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <mac address='fa:16:3e:8c:0c:76'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target dev='tap4109af21-a3'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model type='virtio'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <driver name='vhost' rx_queue_size='512'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <mtu size='1442'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='net0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <interface type='ethernet'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <mac address='fa:16:3e:39:92:26'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target dev='tapfbe265e8-4c'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model type='virtio'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <driver name='vhost' rx_queue_size='512'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <mtu size='1442'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='net1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <serial type='pty'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <source path='/dev/pts/0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <log file='/var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/console.log' append='off'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target type='isa-serial' port='0'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <model name='isa-serial'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      </target>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='serial0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </serial>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <console type='pty' tty='/dev/pts/0'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <source path='/dev/pts/0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <log file='/var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/console.log' append='off'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target type='serial' port='0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='serial0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </console>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <input type='tablet' bus='usb'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='input0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='usb' bus='0' port='1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <input type='mouse' bus='ps2'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='input1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <input type='keyboard' bus='ps2'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='input2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <listen type='address' address='::0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </graphics>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <audio id='1' type='none'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <video>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model type='virtio' heads='1' primary='yes'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='video0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </video>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <watchdog model='itco' action='reset'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='watchdog0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </watchdog>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <memballoon model='virtio'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <stats period='10'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='balloon0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </memballoon>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <rng model='virtio'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <backend model='random'>/dev/urandom</backend>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='rng0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </rng>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </devices>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <label>system_u:system_r:svirt_t:s0:c543,c992</label>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c543,c992</imagelabel>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </seclabel>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <label>+107:+107</label>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <imagelabel>+107:+107</imagelabel>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </seclabel>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: </domain>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.753 256757 INFO nova.virt.libvirt.driver [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully detached device tapfbe265e8-4c from instance daa0d61c-ce51-4a65-82e0-106c2654ed92 from the persistent domain config.#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.753 256757 DEBUG nova.virt.libvirt.driver [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] (1/8): Attempting to detach device tapfbe265e8-4c with device alias net1 from instance daa0d61c-ce51-4a65-82e0-106c2654ed92 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.754 256757 DEBUG nova.virt.libvirt.guest [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] detach device xml: <interface type="ethernet">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <mac address="fa:16:3e:39:92:26"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <model type="virtio"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <driver name="vhost" rx_queue_size="512"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <mtu size="1442"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <target dev="tapfbe265e8-4c"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: </interface>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
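[Editor's note] The two-phase detach recorded above (persistent domain config first, then the live domain with retries) maps onto libvirt's detachDeviceFlags API. A minimal libvirt-python sketch of the calls these lines record, not Nova's actual code; the connection URI is an assumption, and the UUID and interface XML are copied from the log lines above:

    import libvirt

    # Interface XML as logged by detach_device above; MAC and target
    # dev are the values of this instance's net1 port.
    IFACE_XML = """
    <interface type="ethernet">
      <mac address="fa:16:3e:39:92:26"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapfbe265e8-4c"/>
    </interface>
    """

    conn = libvirt.open('qemu:///system')  # assumed URI
    dom = conn.lookupByUUIDString('daa0d61c-ce51-4a65-82e0-106c2654ed92')

    # Phase 1: drop the device from the persistent definition so it
    # does not reappear on the next boot of the guest.
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

    # Phase 2: ask QEMU to hot-unplug it from the running guest. This
    # is asynchronous: completion is signalled by a DEVICE_REMOVED
    # event, which is why the driver then logs "Start waiting for the
    # detach event".
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)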
Dec  7 05:13:02 np0005549474 kernel: tapfbe265e8-4c (unregistering): left promiscuous mode
Dec  7 05:13:02 np0005549474 NetworkManager[49051]: <info>  [1765102382.8790] device (tapfbe265e8-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.890 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:02 np0005549474 ovn_controller[154296]: 2025-12-07T10:13:02Z|00061|binding|INFO|Releasing lport fbe265e8-4ccb-490c-b57d-5c1633844053 from this chassis (sb_readonly=0)
Dec  7 05:13:02 np0005549474 ovn_controller[154296]: 2025-12-07T10:13:02Z|00062|binding|INFO|Setting lport fbe265e8-4ccb-490c-b57d-5c1633844053 down in Southbound
Dec  7 05:13:02 np0005549474 ovn_controller[154296]: 2025-12-07T10:13:02Z|00063|binding|INFO|Removing iface tapfbe265e8-4c ovn-installed in OVS
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.893 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.896 256757 DEBUG nova.virt.libvirt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Received event <DeviceRemovedEvent: 1765102382.8964207, daa0d61c-ce51-4a65-82e0-106c2654ed92 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.901 256757 DEBUG nova.virt.libvirt.driver [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Start waiting for the detach event from libvirt for device tapfbe265e8-4c with device alias net1 for instance daa0d61c-ce51-4a65-82e0-106c2654ed92 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.901 256757 DEBUG nova.virt.libvirt.guest [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:92:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfbe265e8-4c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  7 05:13:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:02.901 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:92:26 10.100.0.25', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'daa0d61c-ce51-4a65-82e0-106c2654ed92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e50e4dbc-db48-44c0-b801-323654e1b24c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9e687d9-7923-4eb6-b2b9-ae9c6837acef, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=fbe265e8-4ccb-490c-b57d-5c1633844053) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:13:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:02.903 164143 INFO neutron.agent.ovn.metadata.agent [-] Port fbe265e8-4ccb-490c-b57d-5c1633844053 in datapath e50e4dbc-db48-44c0-b801-323654e1b24c unbound from our chassis#033[00m
Dec  7 05:13:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:02.907 164143 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e50e4dbc-db48-44c0-b801-323654e1b24c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
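[Editor's note] The ovn_controller "Releasing lport / Setting lport down" messages above can be cross-checked directly in the OVN Southbound DB. A hedged sketch shelling out to the real ovn-sbctl CLI rather than the ovsdbapp IDL the agents themselves use; the helper name is illustrative, and ovn-sbctl must be run where the SB DB is reachable:

    import subprocess

    def port_binding_state(logical_port: str) -> str:
        """Return the 'up' and 'chassis' columns for a logical port.

        After the release logged above, 'up' reads false and the
        chassis column is cleared.
        """
        return subprocess.check_output(
            ['ovn-sbctl', '--bare', '--columns=up,chassis',
             'find', 'Port_Binding',
             f'logical_port={logical_port}'],
            text=True)

    print(port_binding_state('fbe265e8-4ccb-490c-b57d-5c1633844053'))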
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.911 256757 DEBUG nova.virt.libvirt.guest [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:92:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfbe265e8-4c"/></interface> not found in domain: <domain type='kvm' id='3'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <name>instance-00000006</name>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <uuid>daa0d61c-ce51-4a65-82e0-106c2654ed92</uuid>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <metadata>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:name>tempest-TestNetworkBasicOps-server-1390661383</nova:name>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:creationTime>2025-12-07 10:12:12</nova:creationTime>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:flavor name="m1.nano">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:memory>128</nova:memory>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:disk>1</nova:disk>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:swap>0</nova:swap>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:vcpus>1</nova:vcpus>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </nova:flavor>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:owner>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </nova:owner>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:ports>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:port uuid="4109af21-a3da-49b5-8481-432b45bf7ea9">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </nova:port>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:port uuid="fbe265e8-4ccb-490c-b57d-5c1633844053">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </nova:port>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </nova:ports>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: </nova:instance>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </metadata>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <memory unit='KiB'>131072</memory>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <vcpu placement='static'>1</vcpu>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <resource>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <partition>/machine</partition>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </resource>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <sysinfo type='smbios'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <system>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='manufacturer'>RDO</entry>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='product'>OpenStack Compute</entry>
Dec  7 05:13:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:02.909 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[741c5486-25f8-48f9-8328-9b420dee953c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  7 05:13:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:02.911 164143 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c namespace which is not needed anymore#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='serial'>daa0d61c-ce51-4a65-82e0-106c2654ed92</entry>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='uuid'>daa0d61c-ce51-4a65-82e0-106c2654ed92</entry>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <entry name='family'>Virtual Machine</entry>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </system>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </sysinfo>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <os>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <boot dev='hd'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <smbios mode='sysinfo'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <features>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <acpi/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <apic/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <vmcoreinfo state='on'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </features>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <cpu mode='custom' match='exact' check='full'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <vendor>AMD</vendor>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='x2apic'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='tsc-deadline'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='hypervisor'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='tsc_adjust'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='spec-ctrl'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='stibp'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='ssbd'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='cmp_legacy'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='overflow-recov'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='succor'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='ibrs'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='amd-ssbd'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='virt-ssbd'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='lbrv'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='tsc-scale'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='vmcb-clean'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='flushbyasid'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='pause-filter'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='pfthreshold'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='xsaves'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='svm'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='require' name='topoext'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='npt'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <feature policy='disable' name='nrip-save'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </cpu>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <clock offset='utc'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <timer name='pit' tickpolicy='delay'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <timer name='hpet' present='no'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </clock>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <on_poweroff>destroy</on_poweroff>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <on_reboot>restart</on_reboot>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <on_crash>destroy</on_crash>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <devices>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <disk type='network' device='disk'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <driver name='qemu' type='raw' cache='none'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <auth username='openstack'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <secret type='ceph' uuid='75f4c9fd-539a-5e17-b55a-0a12a4e2736c'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <source protocol='rbd' name='vms/daa0d61c-ce51-4a65-82e0-106c2654ed92_disk' index='2'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.100' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.102' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.101' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target dev='vda' bus='virtio'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='virtio-disk0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <disk type='network' device='cdrom'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <driver name='qemu' type='raw' cache='none'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <auth username='openstack'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <secret type='ceph' uuid='75f4c9fd-539a-5e17-b55a-0a12a4e2736c'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <source protocol='rbd' name='vms/daa0d61c-ce51-4a65-82e0-106c2654ed92_disk.config' index='1'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.100' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.102' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <host name='192.168.122.101' port='6789'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target dev='sda' bus='sata'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <readonly/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='sata0-0-0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='0' model='pcie-root'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pcie.0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='1' port='0x10'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='2' port='0x11'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='3' port='0x12'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.3'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='4' port='0x13'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.4'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='5' port='0x14'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.5'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='6' port='0x15'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.6'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='7' port='0x16'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.7'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='8' port='0x17'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.8'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='9' port='0x18'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.9'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='10' port='0x19'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.10'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='11' port='0x1a'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.11'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='12' port='0x1b'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.12'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='13' port='0x1c'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.13'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='14' port='0x1d'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.14'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='15' port='0x1e'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.15'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='16' port='0x1f'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.16'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='17' port='0x20'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.17'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='18' port='0x21'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.18'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='19' port='0x22'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.19'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='20' port='0x23'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.20'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='21' port='0x24'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.21'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='22' port='0x25'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.22'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='23' port='0x26'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.23'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='24' port='0x27'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.24'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target chassis='25' port='0x28'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.25'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model name='pcie-pci-bridge'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='pci.26'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='usb'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <controller type='sata' index='0'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='ide'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <interface type='ethernet'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <mac address='fa:16:3e:8c:0c:76'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target dev='tap4109af21-a3'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model type='virtio'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <driver name='vhost' rx_queue_size='512'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <mtu size='1442'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='net0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <serial type='pty'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <source path='/dev/pts/0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <log file='/var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/console.log' append='off'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target type='isa-serial' port='0'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:        <model name='isa-serial'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      </target>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='serial0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </serial>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <console type='pty' tty='/dev/pts/0'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <source path='/dev/pts/0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <log file='/var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/console.log' append='off'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <target type='serial' port='0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='serial0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </console>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <input type='tablet' bus='usb'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='input0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='usb' bus='0' port='1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <input type='mouse' bus='ps2'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='input1'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <input type='keyboard' bus='ps2'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='input2'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <listen type='address' address='::0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </graphics>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <audio id='1' type='none'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <video>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <model type='virtio' heads='1' primary='yes'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='video0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </video>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <watchdog model='itco' action='reset'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='watchdog0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </watchdog>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <memballoon model='virtio'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <stats period='10'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='balloon0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </memballoon>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <rng model='virtio'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <backend model='random'>/dev/urandom</backend>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <alias name='rng0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </rng>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </devices>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <label>system_u:system_r:svirt_t:s0:c543,c992</label>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c543,c992</imagelabel>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </seclabel>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <label>+107:+107</label>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <imagelabel>+107:+107</imagelabel>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </seclabel>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: </domain>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
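The dump above is the live domain XML that get_interface_by_cfg (guest.py:282) searches to locate the device being detached. A minimal sketch of that kind of lookup, assuming the full <domain> text is available as a string; nova actually matches the whole device config, so matching on MAC alone here is a simplification:

    # Locate an <interface> element in a libvirt domain XML dump by MAC
    # address. Illustrative only; the tiny domain_xml literal below is a
    # trimmed copy of the dump above.
    import xml.etree.ElementTree as ET

    domain_xml = """<domain type='kvm'>
      <devices>
        <interface type='ethernet'>
          <mac address='fa:16:3e:8c:0c:76'/>
          <target dev='tap4109af21-a3'/>
        </interface>
      </devices>
    </domain>"""

    def find_interface_by_mac(xml_text: str, mac: str):
        root = ET.fromstring(xml_text)
        for iface in root.findall('./devices/interface'):
            mac_el = iface.find('mac')
            if mac_el is not None and mac_el.get('address') == mac:
                return ET.tostring(iface, encoding='unicode')
        return None

    print(find_interface_by_mac(domain_xml, 'fa:16:3e:8c:0c:76'))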
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.912 256757 INFO nova.virt.libvirt.driver [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully detached device tapfbe265e8-4c from instance daa0d61c-ce51-4a65-82e0-106c2654ed92 from the live domain config.#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.913 256757 DEBUG nova.virt.libvirt.vif [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1390661383',display_name='tempest-TestNetworkBasicOps-server-1390661383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1390661383',id=6,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+/vPI4e2NtH6h2oCv9Fj3XWNq8zUo66YWV86MfpANVywGXA0slkI6U0K669sWiaSD+5dsMO7JVa1SJLOuvVvWAjhWYnI3Sk4xqn8SB4wmPdrCzHMVsE1qT7PZGkKUz3w==',key_name='tempest-TestNetworkBasicOps-1190719752',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-rn85oa33',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:11:43Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=daa0d61c-ce51-4a65-82e0-106c2654ed92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": 
null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.914 256757 DEBUG nova.network.os_vif_util [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.915 256757 DEBUG nova.network.os_vif_util [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.915 256757 DEBUG os_vif [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.919 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.920 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbe265e8-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.922 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.924 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.925 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.929 256757 INFO os_vif [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c')#033[00m
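The OVS side of the unplug is the DelPortCommand transaction logged at 10:13:02.920. A rough CLI equivalent expressed with subprocess rather than ovsdbapp's IDL (illustrative, not nova's code path); --if-exists mirrors if_exists=True, so an already-removed port is not an error:

    import subprocess

    def del_port(bridge: str, port: str) -> None:
        # Same effect as DelPortCommand(port=..., bridge=..., if_exists=True)
        subprocess.run(
            ['ovs-vsctl', '--if-exists', 'del-port', bridge, port],
            check=True,
        )

    del_port('br-int', 'tapfbe265e8-4c')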
Dec  7 05:13:02 np0005549474 nova_compute[256753]: 2025-12-07 10:13:02.930 256757 DEBUG nova.virt.libvirt.guest [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:name>tempest-TestNetworkBasicOps-server-1390661383</nova:name>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:creationTime>2025-12-07 10:13:02</nova:creationTime>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:flavor name="m1.nano">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:memory>128</nova:memory>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:disk>1</nova:disk>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:swap>0</nova:swap>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:vcpus>1</nova:vcpus>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </nova:flavor>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:owner>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </nova:owner>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  <nova:ports>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    <nova:port uuid="4109af21-a3da-49b5-8481-432b45bf7ea9">
Dec  7 05:13:02 np0005549474 nova_compute[256753]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:    </nova:port>
Dec  7 05:13:02 np0005549474 nova_compute[256753]:  </nova:ports>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: </nova:instance>
Dec  7 05:13:02 np0005549474 nova_compute[256753]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
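The <nova:instance> fragment above is pushed into the domain through libvirt's metadata API. A sketch with the libvirt-python binding, assuming a reachable qemu:///system socket; the trimmed metadata_xml literal stands in for the fragment logged above, and the 'instance' key matches the xmlns:instance alias visible in the 10:13:05 dump below:

    # Not nova's exact code path; requires libvirt-python and a running
    # libvirtd. Applies the metadata to both the live and persistent config.
    import libvirt

    metadata_xml = (
        '<instance xmlns="http://openstack.org/xmlns/libvirt/nova/1.1">'
        '<name>tempest-TestNetworkBasicOps-server-1390661383</name>'
        '</instance>'
    )

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('daa0d61c-ce51-4a65-82e0-106c2654ed92')
    dom.setMetadata(
        libvirt.VIR_DOMAIN_METADATA_ELEMENT,
        metadata_xml,
        'instance',                                     # namespace prefix
        'http://openstack.org/xmlns/libvirt/nova/1.1',  # namespace URI
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG,
    )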
Dec  7 05:13:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:13:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:02.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
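Each beast access line carries client, user, timestamp, request, status, byte count and latency in a fixed layout. A hypothetical parser for these lines, with the field order inferred from this log rather than from radosgw documentation:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:10:13:02.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000026s')
    m = BEAST.search(line)
    if m:
        print(m['client'], m['request'], m['status'], float(m['latency']))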
Dec  7 05:13:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:02 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c004cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:03 np0005549474 neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c[271080]: [NOTICE]   (271084) : haproxy version is 2.8.14-c23fe91
Dec  7 05:13:03 np0005549474 neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c[271080]: [NOTICE]   (271084) : path to executable is /usr/sbin/haproxy
Dec  7 05:13:03 np0005549474 neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c[271080]: [WARNING]  (271084) : Exiting Master process...
Dec  7 05:13:03 np0005549474 neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c[271080]: [ALERT]    (271084) : Current worker (271086) exited with code 143 (Terminated)
Dec  7 05:13:03 np0005549474 neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c[271080]: [WARNING]  (271084) : All workers exited. Exiting... (0)
Dec  7 05:13:03 np0005549474 systemd[1]: libpod-be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629.scope: Deactivated successfully.
Dec  7 05:13:03 np0005549474 podman[271377]: 2025-12-07 10:13:03.064051304 +0000 UTC m=+0.052991445 container died be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  7 05:13:03 np0005549474 nova_compute[256753]: 2025-12-07 10:13:03.086 256757 DEBUG nova.compute.manager [req-c06605f3-8caf-4ec7-bba2-2f1a5a12f0c9 req-2dde5f6f-07c3-429d-aef7-ef9f5ec99b16 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-vif-unplugged-fbe265e8-4ccb-490c-b57d-5c1633844053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:13:03 np0005549474 nova_compute[256753]: 2025-12-07 10:13:03.086 256757 DEBUG oslo_concurrency.lockutils [req-c06605f3-8caf-4ec7-bba2-2f1a5a12f0c9 req-2dde5f6f-07c3-429d-aef7-ef9f5ec99b16 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:13:03 np0005549474 nova_compute[256753]: 2025-12-07 10:13:03.087 256757 DEBUG oslo_concurrency.lockutils [req-c06605f3-8caf-4ec7-bba2-2f1a5a12f0c9 req-2dde5f6f-07c3-429d-aef7-ef9f5ec99b16 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:13:03 np0005549474 nova_compute[256753]: 2025-12-07 10:13:03.087 256757 DEBUG oslo_concurrency.lockutils [req-c06605f3-8caf-4ec7-bba2-2f1a5a12f0c9 req-2dde5f6f-07c3-429d-aef7-ef9f5ec99b16 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:13:03 np0005549474 nova_compute[256753]: 2025-12-07 10:13:03.087 256757 DEBUG nova.compute.manager [req-c06605f3-8caf-4ec7-bba2-2f1a5a12f0c9 req-2dde5f6f-07c3-429d-aef7-ef9f5ec99b16 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] No waiting events found dispatching network-vif-unplugged-fbe265e8-4ccb-490c-b57d-5c1633844053 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:13:03 np0005549474 nova_compute[256753]: 2025-12-07 10:13:03.088 256757 WARNING nova.compute.manager [req-c06605f3-8caf-4ec7-bba2-2f1a5a12f0c9 req-2dde5f6f-07c3-429d-aef7-ef9f5ec99b16 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received unexpected event network-vif-unplugged-fbe265e8-4ccb-490c-b57d-5c1633844053 for instance with vm_state active and task_state None.#033[00m
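The WARNING fires because pop_instance_event found no registered waiter: nova treats an external event as expected only if some in-flight operation prepared for it first. A toy model of that registry pattern (simplified, not nova's implementation):

    # Per-instance registry of expected events. An arrival with no prepared
    # waiter is "unexpected", exactly as in the WARNING above.
    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = defaultdict(dict)  # instance uuid -> {name: Event}

        def prepare(self, instance, name):
            with self._lock:
                ev = threading.Event()
                self._events[instance][name] = ev
                return ev

        def pop(self, instance, name):
            with self._lock:
                return self._events[instance].pop(name, None)

    registry = InstanceEvents()
    ev = registry.pop('daa0d61c-ce51-4a65-82e0-106c2654ed92',
                      'network-vif-unplugged-fbe265e8-4ccb-490c-b57d-5c1633844053')
    if ev is None:
        print('Received unexpected event (no waiter registered)')
    else:
        ev.set()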
Dec  7 05:13:03 np0005549474 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629-userdata-shm.mount: Deactivated successfully.
Dec  7 05:13:03 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6a41bad89a4f1b53554be953966468d557a451730a3ffdf712af00a4138ac17a-merged.mount: Deactivated successfully.
Dec  7 05:13:03 np0005549474 podman[271377]: 2025-12-07 10:13:03.11637927 +0000 UTC m=+0.105319391 container cleanup be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  7 05:13:03 np0005549474 systemd[1]: libpod-conmon-be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629.scope: Deactivated successfully.
Dec  7 05:13:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v925: 337 pgs: 337 active+clean; 121 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 30 op/s
Dec  7 05:13:03 np0005549474 podman[271403]: 2025-12-07 10:13:03.187190419 +0000 UTC m=+0.048061110 container remove be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  7 05:13:03 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:03.197 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[576dabf7-b7e0-4080-ae78-ef856b777af6]: (4, ('Sun Dec  7 10:13:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c (be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629)\nbe3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629\nSun Dec  7 10:13:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c (be3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629)\nbe3f77b2f4852b3335a15b9a90ee100d5187fb89a655028f2ade0be47afc1629\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:03 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:03.199 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9ddfcb-988f-478f-8a52-8d6e63594bb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:03 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:03.201 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape50e4dbc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:13:03 np0005549474 kernel: tape50e4dbc-d0: left promiscuous mode
Dec  7 05:13:03 np0005549474 nova_compute[256753]: 2025-12-07 10:13:03.251 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:03 np0005549474 nova_compute[256753]: 2025-12-07 10:13:03.268 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:03 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:03.272 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9ef4dc-551e-4e08-9b59-c0a9b8d956b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:03 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:03.296 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[a6437442-e1f8-45a9-8d03-50a31f72bfb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:03 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:03.297 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[de2a043c-079e-4760-ad74-02799672b7e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:03 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:03.312 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[76a6c121-4257-48fa-b0f2-11b6ba57870c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429536, 'reachable_time': 34483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271416, 'error': None, 'target': 'ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:03 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:03.315 164283 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  7 05:13:03 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:03.315 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[f73ef118-42ae-4d16-8906-8edb3aaf652b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:03 np0005549474 systemd[1]: run-netns-ovnmeta\x2de50e4dbc\x2ddb48\x2d44c0\x2db801\x2d323654e1b24c.mount: Deactivated successfully.
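The ovnmeta namespace teardown ends with systemd releasing the bind mount under /run/netns. A sketch of the remove_netns step using pyroute2, the library neutron's privileged ip_lib wraps; equivalent to ip netns delete, and requires root:

    from pyroute2 import netns

    ns = 'ovnmeta-e50e4dbc-db48-44c0-b801-323654e1b24c'
    # Skip silently if the namespace is already gone, as if_exists-style
    # teardown code usually does.
    if ns in netns.listnetns():
        netns.remove(ns)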
Dec  7 05:13:03 np0005549474 podman[271417]: 2025-12-07 10:13:03.414516763 +0000 UTC m=+0.061898617 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
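The health_status=healthy event above is emitted by podman's periodic healthcheck; the same probe can be run on demand. An illustrative subprocess call, where exit status 0 means healthy:

    import subprocess

    rc = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_metadata_agent']
    ).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')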
Dec  7 05:13:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800ac40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/101303 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
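The Layer4 check haproxy reports is a bare TCP connect, marked DOWN on connection refused. A sketch of such a probe; the backend address and the default NFS port 2049 are assumptions for illustration:

    import socket

    def l4_check(host: str, port: int, timeout: float = 1.0) -> bool:
        # DOWN corresponds to ECONNREFUSED / timeout here.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print('UP' if l4_check('192.168.122.100', 2049) else 'DOWN')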
Dec  7 05:13:03 np0005549474 nova_compute[256753]: 2025-12-07 10:13:03.658 256757 DEBUG oslo_concurrency.lockutils [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:13:03 np0005549474 nova_compute[256753]: 2025-12-07 10:13:03.659 256757 DEBUG oslo_concurrency.lockutils [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquired lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:13:03 np0005549474 nova_compute[256753]: 2025-12-07 10:13:03.659 256757 DEBUG nova.network.neutron [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  7 05:13:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:03.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:04 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:04 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:05 np0005549474 ovn_controller[154296]: 2025-12-07T10:13:05Z|00064|binding|INFO|Releasing lport 5f556ba9-478e-466f-a4d9-dec36f26c0bf from this chassis (sb_readonly=0)
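To confirm the release logged above, one can ask the OVN southbound database which chassis, if any, still claims the port binding. A sketch shelling out to ovn-sbctl; the lport name is taken from the log line:

    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=logical_port,chassis', 'list',
         'Port_Binding', '5f556ba9-478e-466f-a4d9-dec36f26c0bf'],
        capture_output=True, text=True, check=True,
    )
    # An empty chassis column means no chassis currently claims the port.
    print(out.stdout)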
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.061 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v926: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 30 op/s
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.216 256757 DEBUG nova.compute.manager [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-vif-plugged-fbe265e8-4ccb-490c-b57d-5c1633844053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.217 256757 DEBUG oslo_concurrency.lockutils [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.217 256757 DEBUG oslo_concurrency.lockutils [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.218 256757 DEBUG oslo_concurrency.lockutils [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.218 256757 DEBUG nova.compute.manager [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] No waiting events found dispatching network-vif-plugged-fbe265e8-4ccb-490c-b57d-5c1633844053 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.218 256757 WARNING nova.compute.manager [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received unexpected event network-vif-plugged-fbe265e8-4ccb-490c-b57d-5c1633844053 for instance with vm_state active and task_state None.#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.219 256757 DEBUG nova.compute.manager [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-vif-deleted-fbe265e8-4ccb-490c-b57d-5c1633844053 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.219 256757 INFO nova.compute.manager [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Neutron deleted interface fbe265e8-4ccb-490c-b57d-5c1633844053; detaching it from the instance and deleting it from the info cache#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.219 256757 DEBUG nova.network.neutron [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updating instance_info_cache with network_info: [{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
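The refreshed cache entry is a JSON list of VIFs; after the detach only port 4109af21… remains, with its fixed 10.100.0.13 and floating 192.168.122.219 addresses. A sketch that walks such an entry and lists its addresses, using a trimmed copy of the structure above:

    network_info = [{
        "id": "4109af21-a3da-49b5-8481-432b45bf7ea9",
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.13", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.219",
                                       "type": "floating"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["type"], ip["address"])
                for fip in ip.get("floating_ips", []):
                    print(vif["id"], fip["type"], fip["address"])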
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.245 256757 DEBUG nova.objects.instance [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lazy-loading 'system_metadata' on Instance uuid daa0d61c-ce51-4a65-82e0-106c2654ed92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.271 256757 DEBUG nova.objects.instance [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lazy-loading 'flavor' on Instance uuid daa0d61c-ce51-4a65-82e0-106c2654ed92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.292 256757 DEBUG nova.virt.libvirt.vif [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1390661383',display_name='tempest-TestNetworkBasicOps-server-1390661383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1390661383',id=6,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+/vPI4e2NtH6h2oCv9Fj3XWNq8zUo66YWV86MfpANVywGXA0slkI6U0K669sWiaSD+5dsMO7JVa1SJLOuvVvWAjhWYnI3Sk4xqn8SB4wmPdrCzHMVsE1qT7PZGkKUz3w==',key_name='tempest-TestNetworkBasicOps-1190719752',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-rn85oa33',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:11:43Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=daa0d61c-ce51-4a65-82e0-106c2654ed92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": 
null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.292 256757 DEBUG nova.network.os_vif_util [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Converting VIF {"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.294 256757 DEBUG nova.network.os_vif_util [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.297 256757 DEBUG nova.virt.libvirt.guest [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:92:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfbe265e8-4c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.300 256757 DEBUG nova.virt.libvirt.guest [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:92:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfbe265e8-4c"/></interface> not found in domain: <domain type='kvm' id='3'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <name>instance-00000006</name>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <uuid>daa0d61c-ce51-4a65-82e0-106c2654ed92</uuid>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <metadata>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:name>tempest-TestNetworkBasicOps-server-1390661383</nova:name>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:creationTime>2025-12-07 10:13:02</nova:creationTime>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:flavor name="m1.nano">
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:memory>128</nova:memory>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:disk>1</nova:disk>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:swap>0</nova:swap>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:vcpus>1</nova:vcpus>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </nova:flavor>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:owner>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </nova:owner>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:ports>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:port uuid="4109af21-a3da-49b5-8481-432b45bf7ea9">
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </nova:port>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </nova:ports>
Dec  7 05:13:05 np0005549474 nova_compute[256753]: </nova:instance>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </metadata>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <memory unit='KiB'>131072</memory>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <vcpu placement='static'>1</vcpu>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <resource>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <partition>/machine</partition>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </resource>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <sysinfo type='smbios'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <system>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='manufacturer'>RDO</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='product'>OpenStack Compute</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='serial'>daa0d61c-ce51-4a65-82e0-106c2654ed92</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='uuid'>daa0d61c-ce51-4a65-82e0-106c2654ed92</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='family'>Virtual Machine</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </system>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </sysinfo>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <os>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <boot dev='hd'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <smbios mode='sysinfo'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <features>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <acpi/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <apic/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <vmcoreinfo state='on'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </features>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <cpu mode='custom' match='exact' check='full'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <vendor>AMD</vendor>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='x2apic'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='tsc-deadline'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='hypervisor'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='tsc_adjust'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='spec-ctrl'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='stibp'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='ssbd'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='cmp_legacy'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='overflow-recov'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='succor'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='ibrs'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='amd-ssbd'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='virt-ssbd'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='lbrv'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='tsc-scale'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='vmcb-clean'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='flushbyasid'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='pause-filter'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='pfthreshold'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='xsaves'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='svm'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='topoext'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='npt'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='nrip-save'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </cpu>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <clock offset='utc'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <timer name='pit' tickpolicy='delay'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <timer name='hpet' present='no'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </clock>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <on_poweroff>destroy</on_poweroff>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <on_reboot>restart</on_reboot>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <on_crash>destroy</on_crash>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <devices>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <disk type='network' device='disk'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <driver name='qemu' type='raw' cache='none'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <auth username='openstack'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <secret type='ceph' uuid='75f4c9fd-539a-5e17-b55a-0a12a4e2736c'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <source protocol='rbd' name='vms/daa0d61c-ce51-4a65-82e0-106c2654ed92_disk' index='2'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.100' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.102' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.101' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target dev='vda' bus='virtio'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='virtio-disk0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <disk type='network' device='cdrom'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <driver name='qemu' type='raw' cache='none'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <auth username='openstack'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <secret type='ceph' uuid='75f4c9fd-539a-5e17-b55a-0a12a4e2736c'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <source protocol='rbd' name='vms/daa0d61c-ce51-4a65-82e0-106c2654ed92_disk.config' index='1'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.100' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.102' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.101' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target dev='sda' bus='sata'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <readonly/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='sata0-0-0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='0' model='pcie-root'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pcie.0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='1' port='0x10'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='2' port='0x11'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='3' port='0x12'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.3'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='4' port='0x13'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.4'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='5' port='0x14'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.5'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='6' port='0x15'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.6'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='7' port='0x16'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.7'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='8' port='0x17'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.8'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='9' port='0x18'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.9'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='10' port='0x19'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.10'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='11' port='0x1a'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.11'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='12' port='0x1b'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.12'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='13' port='0x1c'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.13'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='14' port='0x1d'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.14'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='15' port='0x1e'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.15'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='16' port='0x1f'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.16'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='17' port='0x20'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.17'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='18' port='0x21'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.18'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='19' port='0x22'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.19'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='20' port='0x23'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.20'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='21' port='0x24'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.21'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='22' port='0x25'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.22'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='23' port='0x26'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.23'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='24' port='0x27'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.24'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='25' port='0x28'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.25'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-pci-bridge'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.26'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='usb'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='sata' index='0'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='ide'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <interface type='ethernet'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <mac address='fa:16:3e:8c:0c:76'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target dev='tap4109af21-a3'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model type='virtio'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <driver name='vhost' rx_queue_size='512'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <mtu size='1442'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='net0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <serial type='pty'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <source path='/dev/pts/0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <log file='/var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/console.log' append='off'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target type='isa-serial' port='0'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <model name='isa-serial'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      </target>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='serial0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </serial>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <console type='pty' tty='/dev/pts/0'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <source path='/dev/pts/0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <log file='/var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/console.log' append='off'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target type='serial' port='0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='serial0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </console>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <input type='tablet' bus='usb'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='input0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='usb' bus='0' port='1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <input type='mouse' bus='ps2'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='input1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <input type='keyboard' bus='ps2'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='input2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <listen type='address' address='::0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </graphics>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <audio id='1' type='none'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <video>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model type='virtio' heads='1' primary='yes'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='video0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </video>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <watchdog model='itco' action='reset'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='watchdog0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </watchdog>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <memballoon model='virtio'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <stats period='10'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='balloon0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </memballoon>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <rng model='virtio'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <backend model='random'>/dev/urandom</backend>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='rng0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </rng>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </devices>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <label>system_u:system_r:svirt_t:s0:c543,c992</label>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c543,c992</imagelabel>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </seclabel>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <label>+107:+107</label>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <imagelabel>+107:+107</imagelabel>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </seclabel>
Dec  7 05:13:05 np0005549474 nova_compute[256753]: </domain>
Dec  7 05:13:05 np0005549474 nova_compute[256753]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.300 256757 DEBUG nova.virt.libvirt.guest [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:39:92:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfbe265e8-4c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
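[editorial note] The debug line above shows the interface config nova is searching for in the live domain. A minimal sketch of that kind of lookup, using only the standard library; this is an illustrative approximation of what guest.py's get_interface_by_cfg does (matching by MAC and target dev against the dumped XML), not nova's actual implementation:

    import xml.etree.ElementTree as ET

    def find_interface(domain_xml, mac, dev=None):
        """Return the <interface> element matching a MAC (and optionally a
        target dev) in a libvirt domain XML string, or None if absent."""
        root = ET.fromstring(domain_xml)
        for iface in root.findall("./devices/interface"):
            mac_el = iface.find("mac")
            if mac_el is None or mac_el.get("address") != mac:
                continue
            if dev is not None:
                tgt = iface.find("target")
                if tgt is None or tgt.get("dev") != dev:
                    continue
            return iface
        return None

    # Against the domain dumped below, fa:16:3e:8c:0c:76 (tap4109af21-a3)
    # is present, while fa:16:3e:39:92:26 (tapfbe265e8-4c) is not, so this
    # lookup returns None -- matching the "not found in domain" message.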
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.304 256757 DEBUG nova.virt.libvirt.guest [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:39:92:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfbe265e8-4c"/></interface> not found in domain: <domain type='kvm' id='3'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <name>instance-00000006</name>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <uuid>daa0d61c-ce51-4a65-82e0-106c2654ed92</uuid>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <metadata>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:name>tempest-TestNetworkBasicOps-server-1390661383</nova:name>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:creationTime>2025-12-07 10:13:02</nova:creationTime>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:flavor name="m1.nano">
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:memory>128</nova:memory>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:disk>1</nova:disk>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:swap>0</nova:swap>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:vcpus>1</nova:vcpus>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </nova:flavor>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:owner>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </nova:owner>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:ports>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:port uuid="4109af21-a3da-49b5-8481-432b45bf7ea9">
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </nova:port>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </nova:ports>
Dec  7 05:13:05 np0005549474 nova_compute[256753]: </nova:instance>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </metadata>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <memory unit='KiB'>131072</memory>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <vcpu placement='static'>1</vcpu>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <resource>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <partition>/machine</partition>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </resource>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <sysinfo type='smbios'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <system>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='manufacturer'>RDO</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='product'>OpenStack Compute</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='serial'>daa0d61c-ce51-4a65-82e0-106c2654ed92</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='uuid'>daa0d61c-ce51-4a65-82e0-106c2654ed92</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <entry name='family'>Virtual Machine</entry>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </system>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </sysinfo>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <os>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <boot dev='hd'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <smbios mode='sysinfo'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <features>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <acpi/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <apic/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <vmcoreinfo state='on'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </features>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <cpu mode='custom' match='exact' check='full'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <model fallback='forbid'>EPYC-Rome</model>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <vendor>AMD</vendor>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='x2apic'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='tsc-deadline'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='hypervisor'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='tsc_adjust'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='spec-ctrl'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='stibp'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='ssbd'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='cmp_legacy'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='overflow-recov'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='succor'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='ibrs'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='amd-ssbd'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='virt-ssbd'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='lbrv'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='tsc-scale'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='vmcb-clean'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='flushbyasid'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='pause-filter'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='pfthreshold'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='svme-addr-chk'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='lfence-always-serializing'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='xsaves'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='svm'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='require' name='topoext'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='npt'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <feature policy='disable' name='nrip-save'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </cpu>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <clock offset='utc'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <timer name='pit' tickpolicy='delay'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <timer name='hpet' present='no'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </clock>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <on_poweroff>destroy</on_poweroff>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <on_reboot>restart</on_reboot>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <on_crash>destroy</on_crash>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <devices>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <disk type='network' device='disk'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <driver name='qemu' type='raw' cache='none'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <auth username='openstack'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <secret type='ceph' uuid='75f4c9fd-539a-5e17-b55a-0a12a4e2736c'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <source protocol='rbd' name='vms/daa0d61c-ce51-4a65-82e0-106c2654ed92_disk' index='2'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.100' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.102' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.101' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target dev='vda' bus='virtio'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='virtio-disk0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <disk type='network' device='cdrom'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <driver name='qemu' type='raw' cache='none'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <auth username='openstack'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <secret type='ceph' uuid='75f4c9fd-539a-5e17-b55a-0a12a4e2736c'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <source protocol='rbd' name='vms/daa0d61c-ce51-4a65-82e0-106c2654ed92_disk.config' index='1'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.100' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.102' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <host name='192.168.122.101' port='6789'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target dev='sda' bus='sata'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <readonly/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='sata0-0-0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='0' model='pcie-root'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pcie.0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='1' port='0x10'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='2' port='0x11'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='3' port='0x12'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.3'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='4' port='0x13'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.4'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='5' port='0x14'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.5'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='6' port='0x15'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.6'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='7' port='0x16'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.7'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='8' port='0x17'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.8'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='9' port='0x18'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.9'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='10' port='0x19'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.10'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='11' port='0x1a'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.11'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='12' port='0x1b'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.12'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='13' port='0x1c'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.13'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='14' port='0x1d'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.14'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='15' port='0x1e'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.15'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='16' port='0x1f'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.16'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='17' port='0x20'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.17'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='18' port='0x21'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.18'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='19' port='0x22'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.19'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='20' port='0x23'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.20'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='21' port='0x24'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.21'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='22' port='0x25'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.22'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='23' port='0x26'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.23'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='24' port='0x27'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.24'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-root-port'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target chassis='25' port='0x28'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.25'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model name='pcie-pci-bridge'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='pci.26'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='usb'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <controller type='sata' index='0'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='ide'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </controller>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <interface type='ethernet'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <mac address='fa:16:3e:8c:0c:76'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target dev='tap4109af21-a3'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model type='virtio'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <driver name='vhost' rx_queue_size='512'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <mtu size='1442'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='net0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <serial type='pty'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <source path='/dev/pts/0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <log file='/var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/console.log' append='off'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target type='isa-serial' port='0'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:        <model name='isa-serial'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      </target>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='serial0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </serial>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <console type='pty' tty='/dev/pts/0'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <source path='/dev/pts/0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <log file='/var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92/console.log' append='off'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <target type='serial' port='0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='serial0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </console>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <input type='tablet' bus='usb'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='input0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='usb' bus='0' port='1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <input type='mouse' bus='ps2'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='input1'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <input type='keyboard' bus='ps2'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='input2'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </input>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <listen type='address' address='::0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </graphics>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <audio id='1' type='none'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <video>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <model type='virtio' heads='1' primary='yes'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='video0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </video>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <watchdog model='itco' action='reset'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='watchdog0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </watchdog>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <memballoon model='virtio'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <stats period='10'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='balloon0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </memballoon>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <rng model='virtio'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <backend model='random'>/dev/urandom</backend>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <alias name='rng0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </rng>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </devices>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <label>system_u:system_r:svirt_t:s0:c543,c992</label>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c543,c992</imagelabel>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </seclabel>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <label>+107:+107</label>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <imagelabel>+107:+107</imagelabel>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </seclabel>
Dec  7 05:13:05 np0005549474 nova_compute[256753]: </domain>
Dec  7 05:13:05 np0005549474 nova_compute[256753]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.304 256757 WARNING nova.virt.libvirt.driver [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Detaching interface fa:16:3e:39:92:26 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapfbe265e8-4c' not found.#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.305 256757 DEBUG nova.virt.libvirt.vif [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1390661383',display_name='tempest-TestNetworkBasicOps-server-1390661383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1390661383',id=6,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+/vPI4e2NtH6h2oCv9Fj3XWNq8zUo66YWV86MfpANVywGXA0slkI6U0K669sWiaSD+5dsMO7JVa1SJLOuvVvWAjhWYnI3Sk4xqn8SB4wmPdrCzHMVsE1qT7PZGkKUz3w==',key_name='tempest-TestNetworkBasicOps-1190719752',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-rn85oa33',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:11:43Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=daa0d61c-ce51-4a65-82e0-106c2654ed92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.305 256757 DEBUG nova.network.os_vif_util [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Converting VIF {"id": "fbe265e8-4ccb-490c-b57d-5c1633844053", "address": "fa:16:3e:39:92:26", "network": {"id": "e50e4dbc-db48-44c0-b801-323654e1b24c", "bridge": "br-int", "label": "tempest-network-smoke--1488267578", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe265e8-4c", "ovs_interfaceid": "fbe265e8-4ccb-490c-b57d-5c1633844053", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.305 256757 DEBUG nova.network.os_vif_util [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.306 256757 DEBUG os_vif [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.307 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.307 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbe265e8-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.307 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.309 256757 INFO os_vif [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:92:26,bridge_name='br-int',has_traffic_filtering=True,id=fbe265e8-4ccb-490c-b57d-5c1633844053,network=Network(e50e4dbc-db48-44c0-b801-323654e1b24c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe265e8-4c')#033[00m
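For an OVS VIF, the whole unplug path reduces to the single ovsdbapp transaction logged just above: a DelPortCommand against br-int with if_exists=True, which is why a port that is already gone yields "Transaction caused no change" rather than an error. A hedged sketch of the same call, assuming the default local OVS database socket:

    import ovs.db.idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed socket path

    helper = idlutils.get_schema_helper(OVSDB, 'Open_vSwitch')
    helper.register_all()
    conn = connection.Connection(idl=ovs.db.idl.Idl(OVSDB, helper),
                                 timeout=10)
    api = impl_idl.OvsdbIdl(conn)
    # if_exists=True makes the delete idempotent, matching the no-op
    # commit seen in the log when the tap was already removed.
    api.del_port('tapfbe265e8-4c', bridge='br-int',
                 if_exists=True).execute(check_error=True)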
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.309 256757 DEBUG nova.virt.libvirt.guest [req-0b657f88-0574-4673-b64f-568829a044b8 req-e884406d-0136-4cdc-bd11-0c473cc190fb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:name>tempest-TestNetworkBasicOps-server-1390661383</nova:name>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:creationTime>2025-12-07 10:13:05</nova:creationTime>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:flavor name="m1.nano">
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:memory>128</nova:memory>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:disk>1</nova:disk>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:swap>0</nova:swap>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:vcpus>1</nova:vcpus>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </nova:flavor>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:owner>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </nova:owner>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  <nova:ports>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    <nova:port uuid="4109af21-a3da-49b5-8481-432b45bf7ea9">
Dec  7 05:13:05 np0005549474 nova_compute[256753]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:    </nova:port>
Dec  7 05:13:05 np0005549474 nova_compute[256753]:  </nova:ports>
Dec  7 05:13:05 np0005549474 nova_compute[256753]: </nova:instance>
Dec  7 05:13:05 np0005549474 nova_compute[256753]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
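The <nova:instance> block above is nova rewriting its per-domain annotation after the detach; only port 4109af21-a3da-49b5-8481-432b45bf7ea9 remains listed. libvirt stores such blocks under a caller-chosen namespace via setMetadata. A sketch of that call with the XML abbreviated (the key and flags here are illustrative, not lifted from nova's source):

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    # Abbreviated stand-in for the document logged above.
    xml = ('<instance xmlns="%s">'
           '<name>tempest-TestNetworkBasicOps-server-1390661383</name>'
           '</instance>' % NOVA_NS)

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000006')
    # Attach the annotation under nova's namespace on the running domain.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, xml,
                    'instance', NOVA_NS, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    conn.close()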
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.400 256757 INFO nova.network.neutron [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Port fbe265e8-4ccb-490c-b57d-5c1633844053 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.400 256757 DEBUG nova.network.neutron [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updating instance_info_cache with network_info: [{"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.417 256757 DEBUG oslo_concurrency.lockutils [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Releasing lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:13:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:05 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588002f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.437 256757 DEBUG oslo_concurrency.lockutils [None req-85401f33-7216-4b3a-8486-a6371a78469d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "interface-daa0d61c-ce51-4a65-82e0-106c2654ed92-fbe265e8-4ccb-490c-b57d-5c1633844053" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.456 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:05.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.881 256757 DEBUG oslo_concurrency.lockutils [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.882 256757 DEBUG oslo_concurrency.lockutils [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.882 256757 DEBUG oslo_concurrency.lockutils [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.883 256757 DEBUG oslo_concurrency.lockutils [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.883 256757 DEBUG oslo_concurrency.lockutils [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
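The acquire/release pairs above are oslo.concurrency's named in-process locks: terminate_instance serializes on the instance UUID, and clear_events_for_instance briefly takes the derived "-events" lock inside it. The pattern, sketched with the names from this log and the real work elided:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('daa0d61c-ce51-4a65-82e0-106c2654ed92')
    def do_terminate_instance():
        # While this body runs, lockutils logs "acquired" and, on
        # return, "released" together with the hold time.
        with lockutils.lock('daa0d61c-ce51-4a65-82e0-106c2654ed92-events'):
            pass  # drop any pending external events for the instance
        # ...then tear down the guest, its ports and its allocations...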
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.885 256757 INFO nova.compute.manager [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Terminating instance#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.887 256757 DEBUG nova.compute.manager [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  7 05:13:05 np0005549474 kernel: tap4109af21-a3 (unregistering): left promiscuous mode
Dec  7 05:13:05 np0005549474 NetworkManager[49051]: <info>  [1765102385.9518] device (tap4109af21-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  7 05:13:05 np0005549474 ovn_controller[154296]: 2025-12-07T10:13:05Z|00065|binding|INFO|Releasing lport 4109af21-a3da-49b5-8481-432b45bf7ea9 from this chassis (sb_readonly=0)
Dec  7 05:13:05 np0005549474 ovn_controller[154296]: 2025-12-07T10:13:05Z|00066|binding|INFO|Setting lport 4109af21-a3da-49b5-8481-432b45bf7ea9 down in Southbound
Dec  7 05:13:05 np0005549474 ovn_controller[154296]: 2025-12-07T10:13:05Z|00067|binding|INFO|Removing iface tap4109af21-a3 ovn-installed in OVS
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.968 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:05 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.971 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:05 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:05.976 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:0c:76 10.100.0.13'], port_security=['fa:16:3e:8c:0c:76 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'daa0d61c-ce51-4a65-82e0-106c2654ed92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8e92cf5-e64a-4378-8f87-c574612f73da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c4eafcc0-8a7b-4591-b838-69191e9c889f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172a7e02-4a4a-49c7-ab1a-d93e560044ce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=4109af21-a3da-49b5-8481-432b45bf7ea9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:13:05 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:05.977 164143 INFO neutron.agent.ovn.metadata.agent [-] Port 4109af21-a3da-49b5-8481-432b45bf7ea9 in datapath c8e92cf5-e64a-4378-8f87-c574612f73da unbound from our chassis#033[00m
Dec  7 05:13:05 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:05.978 164143 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c8e92cf5-e64a-4378-8f87-c574612f73da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  7 05:13:05 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:05.979 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc12aaa-1858-47a8-80b3-5ae1648c8b53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:05 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:05.979 164143 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da namespace which is not needed anymore#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:05.999 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:06 np0005549474 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec  7 05:13:06 np0005549474 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 17.882s CPU time.
Dec  7 05:13:06 np0005549474 systemd-machined[217882]: Machine qemu-3-instance-00000006 terminated.
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.108 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.117 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:06 np0005549474 neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da[270160]: [NOTICE]   (270164) : haproxy version is 2.8.14-c23fe91
Dec  7 05:13:06 np0005549474 neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da[270160]: [NOTICE]   (270164) : path to executable is /usr/sbin/haproxy
Dec  7 05:13:06 np0005549474 neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da[270160]: [WARNING]  (270164) : Exiting Master process...
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.121 256757 INFO nova.virt.libvirt.driver [-] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Instance destroyed successfully.#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.122 256757 DEBUG nova.objects.instance [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'resources' on Instance uuid daa0d61c-ce51-4a65-82e0-106c2654ed92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:13:06 np0005549474 neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da[270160]: [ALERT]    (270164) : Current worker (270166) exited with code 143 (Terminated)
Dec  7 05:13:06 np0005549474 neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da[270160]: [WARNING]  (270164) : All workers exited. Exiting... (0)
Dec  7 05:13:06 np0005549474 systemd[1]: libpod-b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db.scope: Deactivated successfully.
Dec  7 05:13:06 np0005549474 podman[271538]: 2025-12-07 10:13:06.137256739 +0000 UTC m=+0.059485751 container died b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.138 256757 DEBUG nova.virt.libvirt.vif [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1390661383',display_name='tempest-TestNetworkBasicOps-server-1390661383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1390661383',id=6,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+/vPI4e2NtH6h2oCv9Fj3XWNq8zUo66YWV86MfpANVywGXA0slkI6U0K669sWiaSD+5dsMO7JVa1SJLOuvVvWAjhWYnI3Sk4xqn8SB4wmPdrCzHMVsE1qT7PZGkKUz3w==',key_name='tempest-TestNetworkBasicOps-1190719752',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-rn85oa33',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:11:43Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=daa0d61c-ce51-4a65-82e0-106c2654ed92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.138 256757 DEBUG nova.network.os_vif_util [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "4109af21-a3da-49b5-8481-432b45bf7ea9", "address": "fa:16:3e:8c:0c:76", "network": {"id": "c8e92cf5-e64a-4378-8f87-c574612f73da", "bridge": "br-int", "label": "tempest-network-smoke--1681584642", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4109af21-a3", "ovs_interfaceid": "4109af21-a3da-49b5-8481-432b45bf7ea9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.139 256757 DEBUG nova.network.os_vif_util [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:0c:76,bridge_name='br-int',has_traffic_filtering=True,id=4109af21-a3da-49b5-8481-432b45bf7ea9,network=Network(c8e92cf5-e64a-4378-8f87-c574612f73da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4109af21-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.140 256757 DEBUG os_vif [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:0c:76,bridge_name='br-int',has_traffic_filtering=True,id=4109af21-a3da-49b5-8481-432b45bf7ea9,network=Network(c8e92cf5-e64a-4378-8f87-c574612f73da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4109af21-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.141 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.141 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4109af21-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.143 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.145 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.148 256757 INFO os_vif [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:0c:76,bridge_name='br-int',has_traffic_filtering=True,id=4109af21-a3da-49b5-8481-432b45bf7ea9,network=Network(c8e92cf5-e64a-4378-8f87-c574612f73da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4109af21-a3')#033[00m
Dec  7 05:13:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db-userdata-shm.mount: Deactivated successfully.
Dec  7 05:13:06 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4d93960dc1aa2b7d86cf4ea276ea4e3a8b1d587e1ee1a9c195105dc57b56cbb6-merged.mount: Deactivated successfully.
Dec  7 05:13:06 np0005549474 podman[271538]: 2025-12-07 10:13:06.196487303 +0000 UTC m=+0.118716315 container cleanup b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:13:06 np0005549474 systemd[1]: libpod-conmon-b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db.scope: Deactivated successfully.
Dec  7 05:13:06 np0005549474 podman[271605]: 2025-12-07 10:13:06.270900591 +0000 UTC m=+0.050478266 container remove b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  7 05:13:06 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:06.276 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3b270f-dbfc-4c42-9dd5-d76c7d794eb3]: (4, ('Sun Dec  7 10:13:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da (b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db)\nb92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db\nSun Dec  7 10:13:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da (b92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db)\nb92571a83a227e73f46d3b9e231b886cf78fa9a0ed4d8becf22aa0e27733a2db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:06 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:06.277 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[0e888ec5-c757-4be1-acec-938dc398619b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:06 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:06.278 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8e92cf5-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.279 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:06 np0005549474 kernel: tapc8e92cf5-e0: left promiscuous mode
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.299 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.300 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:13:06 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:06.302 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c801d8-12bb-4b72-938c-70d9db840cf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:06 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:06.313 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[0238eaf9-4eb5-44be-8fe5-5b74a2558b55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:06 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:06.314 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[2df27a79-4b6d-44f5-bcbe-a6a582f3d73c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:06 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:06.332 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[5c01ce59-9573-4b75-b7a7-8f9765cf3c4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426576, 'reachable_time': 20569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271623, 'error': None, 'target': 'ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:06 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:06.335 164283 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  7 05:13:06 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:06.335 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[3517f0bf-77d4-4d74-bc54-49c4beb4eaf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:13:06 np0005549474 systemd[1]: run-netns-ovnmeta\x2dc8e92cf5\x2de64a\x2d4378\x2d8f87\x2dc574612f73da.mount: Deactivated successfully.
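With the last VIF on network c8e92cf5-e64a-4378-8f87-c574612f73da unbound, the metadata agent tears down the whole per-network plumbing: the haproxy container is stopped and removed, the tapc8e92cf5-e0 port leaves OVS, and remove_netns deletes the ovnmeta- namespace. Neutron performs that last step over oslo.privsep; a rough equivalent using pyroute2, the library its ip_lib wraps, would be:

    from pyroute2 import netns

    # Delete the per-network metadata namespace once nothing needs it,
    # tolerating the case where it is already gone.
    try:
        netns.remove('ovnmeta-c8e92cf5-e64a-4378-8f87-c574612f73da')
    except FileNotFoundError:
        pass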
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.513 256757 INFO nova.virt.libvirt.driver [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Deleting instance files /var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92_del#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.513 256757 INFO nova.virt.libvirt.driver [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Deletion of /var/lib/nova/instances/daa0d61c-ce51-4a65-82e0-106c2654ed92_del complete#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.575 256757 INFO nova.compute.manager [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.576 256757 DEBUG oslo.service.loopingcall [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.576 256757 DEBUG nova.compute.manager [-] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  7 05:13:06 np0005549474 nova_compute[256753]: 2025-12-07 10:13:06.577 256757 DEBUG nova.network.neutron [-] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  7 05:13:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:06 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800acf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 05:13:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:06.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  7 05:13:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:06 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15640019e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v927: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 29 op/s
Dec  7 05:13:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:07.156Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:13:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:07.156Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:13:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:07.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.162 256757 DEBUG nova.network.neutron [-] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.180 256757 INFO nova.compute.manager [-] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Took 0.60 seconds to deallocate network for instance.#033[00m
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.226 256757 DEBUG oslo_concurrency.lockutils [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.227 256757 DEBUG oslo_concurrency.lockutils [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.254 256757 DEBUG nova.compute.manager [req-017ad4f4-2a0c-4cad-97e1-0f1537aff3f0 req-55cd2a3c-e111-4710-a637-bf19b64ce584 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-vif-deleted-4109af21-a3da-49b5-8481-432b45bf7ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.299 256757 DEBUG oslo_concurrency.processutils [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.337 256757 DEBUG nova.compute.manager [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-changed-4109af21-a3da-49b5-8481-432b45bf7ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.338 256757 DEBUG nova.compute.manager [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Refreshing instance network info cache due to event network-changed-4109af21-a3da-49b5-8481-432b45bf7ea9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.338 256757 DEBUG oslo_concurrency.lockutils [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.339 256757 DEBUG oslo_concurrency.lockutils [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.339 256757 DEBUG nova.network.neutron [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Refreshing network info cache for port 4109af21-a3da-49b5-8481-432b45bf7ea9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:13:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:07 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:07.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:13:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2099550192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.734 256757 DEBUG oslo_concurrency.processutils [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
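The two processutils lines bracket nova's pool-capacity probe, "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf", which returned 0 in 0.435 s. A minimal sketch of issuing and reading the same query follows; it illustrates the command the log shows, not nova's actual code path, and the key names assume current "ceph df" JSON output.

    import json
    import subprocess

    def ceph_cluster_bytes(conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same command the log shows oslo.concurrency executing.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        stats = json.loads(out)["stats"]  # cluster-wide totals
        return stats["total_bytes"], stats["total_avail_bytes"]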
Dec  7 05:13:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.742 256757 DEBUG nova.compute.provider_tree [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.772 256757 DEBUG nova.scheduler.client.report [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
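The inventory dict above is what the report client sends to placement for this node. Effective schedulable capacity follows capacity = (total - reserved) * allocation_ratio, so these figures imply 32 VCPUs, 7168 MB of RAM, and about 52 GB of disk. A quick check of that arithmetic:

    # Effective capacity implied by the inventory logged above, using
    # placement's formula: capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(cap, 2))
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2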
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.801 256757 DEBUG oslo_concurrency.lockutils [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.845 256757 DEBUG nova.network.neutron [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  7 05:13:07 np0005549474 nova_compute[256753]: 2025-12-07 10:13:07.848 256757 INFO nova.scheduler.client.report [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Deleted allocations for instance daa0d61c-ce51-4a65-82e0-106c2654ed92
Dec  7 05:13:08 np0005549474 nova_compute[256753]: 2025-12-07 10:13:08.170 256757 DEBUG oslo_concurrency.lockutils [None req-481948be-9906-4d1a-99b4-fcb1ea6a5e3f 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
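The Acquiring/Acquired/released triplets that thread through this section are oslo.concurrency's standard trace for named locks, emitted from lockutils.py with wait and hold times. The usual call pattern looks like the sketch below; the function name is illustrative, not nova's exact code.

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        # Runs with the named lock held; oslo logs the acquire/release
        # pairs with wait/hold durations, as in the surrounding lines.
        ...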
Dec  7 05:13:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:08 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588002f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:13:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:13:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:08.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
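radosgw keeps logging the same anonymous "HEAD / HTTP/1.0" request, alternating between 192.168.122.100 and .102 at roughly one-second intervals, which looks like load-balancer health probing (an inference; the prober is not named in this log). An equivalent probe from Python, where the host and port are assumptions to be replaced with the real beast endpoint:

    import http.client

    # Host and port are placeholders; point them at the radosgw (beast)
    # frontend actually serving these requests.
    conn = http.client.HTTPConnection("np0005549474", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the log shows 200 with ~0 latency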
Dec  7 05:13:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:08 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800acf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v928: 337 pgs: 337 active+clean; 63 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 12 KiB/s wr, 50 op/s
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.354 256757 DEBUG nova.network.neutron [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.377 256757 DEBUG oslo_concurrency.lockutils [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-daa0d61c-ce51-4a65-82e0-106c2654ed92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.377 256757 DEBUG nova.compute.manager [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-vif-unplugged-4109af21-a3da-49b5-8481-432b45bf7ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.377 256757 DEBUG oslo_concurrency.lockutils [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.377 256757 DEBUG oslo_concurrency.lockutils [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.377 256757 DEBUG oslo_concurrency.lockutils [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.378 256757 DEBUG nova.compute.manager [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] No waiting events found dispatching network-vif-unplugged-4109af21-a3da-49b5-8481-432b45bf7ea9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.378 256757 WARNING nova.compute.manager [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received unexpected event network-vif-unplugged-4109af21-a3da-49b5-8481-432b45bf7ea9 for instance with vm_state deleted and task_state None.
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.378 256757 DEBUG nova.compute.manager [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received event network-vif-plugged-4109af21-a3da-49b5-8481-432b45bf7ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.378 256757 DEBUG oslo_concurrency.lockutils [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.378 256757 DEBUG oslo_concurrency.lockutils [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.378 256757 DEBUG oslo_concurrency.lockutils [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "daa0d61c-ce51-4a65-82e0-106c2654ed92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.379 256757 DEBUG nova.compute.manager [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] No waiting events found dispatching network-vif-plugged-4109af21-a3da-49b5-8481-432b45bf7ea9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:13:09 np0005549474 nova_compute[256753]: 2025-12-07 10:13:09.379 256757 WARNING nova.compute.manager [req-dcb10067-4d5c-4b0e-a24b-9fae0c571baf req-9fb12d9f-fd64-4273-a145-8de1f6b328cb ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Received unexpected event network-vif-plugged-4109af21-a3da-49b5-8481-432b45bf7ea9 for instance with vm_state deleted and task_state None.
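The two WARNING lines above are a benign teardown race: neutron's vif-unplugged and vif-plugged notifications for port 4109af21 arrived after the instance had already reached vm_state deleted, so no waiter exists to consume them. A self-contained paraphrase of the guard (illustrative only, not nova's exact code):

    # Paraphrase of the check behind the "unexpected event" warning above.
    def handle_event(event_name, vm_state, task_state):
        if vm_state == "deleted" and task_state is None:
            print(f"unexpected event {event_name} for deleted instance")

    handle_event("network-vif-plugged-4109af21", "deleted", None)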
Dec  7 05:13:09 np0005549474 podman[271759]: 2025-12-07 10:13:09.423163949 +0000 UTC m=+0.040469153 container create 9c2480500792ee3f3a64ad0dfd2c9adc5f14e8165e927d1431661ac023aee4fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 05:13:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:09 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1564001a00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:09 np0005549474 systemd[1]: Started libpod-conmon-9c2480500792ee3f3a64ad0dfd2c9adc5f14e8165e927d1431661ac023aee4fe.scope.
Dec  7 05:13:09 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:13:09 np0005549474 podman[271759]: 2025-12-07 10:13:09.405870528 +0000 UTC m=+0.023175752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:13:09 np0005549474 podman[271759]: 2025-12-07 10:13:09.508958737 +0000 UTC m=+0.126263961 container init 9c2480500792ee3f3a64ad0dfd2c9adc5f14e8165e927d1431661ac023aee4fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:13:09 np0005549474 podman[271759]: 2025-12-07 10:13:09.520792509 +0000 UTC m=+0.138097743 container start 9c2480500792ee3f3a64ad0dfd2c9adc5f14e8165e927d1431661ac023aee4fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gould, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:13:09 np0005549474 podman[271759]: 2025-12-07 10:13:09.524290744 +0000 UTC m=+0.141595978 container attach 9c2480500792ee3f3a64ad0dfd2c9adc5f14e8165e927d1431661ac023aee4fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gould, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Dec  7 05:13:09 np0005549474 priceless_gould[271776]: 167 167
Dec  7 05:13:09 np0005549474 systemd[1]: libpod-9c2480500792ee3f3a64ad0dfd2c9adc5f14e8165e927d1431661ac023aee4fe.scope: Deactivated successfully.
Dec  7 05:13:09 np0005549474 podman[271759]: 2025-12-07 10:13:09.526482224 +0000 UTC m=+0.143787428 container died 9c2480500792ee3f3a64ad0dfd2c9adc5f14e8165e927d1431661ac023aee4fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gould, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 05:13:09 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6d8201a234a803a7d80f791c4d822727c3f3030c7261043f6667be7738863128-merged.mount: Deactivated successfully.
Dec  7 05:13:09 np0005549474 podman[271759]: 2025-12-07 10:13:09.565767775 +0000 UTC m=+0.183072999 container remove 9c2480500792ee3f3a64ad0dfd2c9adc5f14e8165e927d1431661ac023aee4fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Dec  7 05:13:09 np0005549474 systemd[1]: libpod-conmon-9c2480500792ee3f3a64ad0dfd2c9adc5f14e8165e927d1431661ac023aee4fe.scope: Deactivated successfully.
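The priceless_gould container above went create, init, start, attach, died, remove in about 140 ms: cephadm runs throwaway containers from the ceph image to execute single helper commands on the host. Its one line of output, "167 167", matches the ceph uid/gid used in these images, suggesting an ownership probe; the exact command is not in the log, so the one below is an assumption.

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Run one command in a throwaway container, capture its output, and
    # let podman remove it, mirroring the lifecycle logged above.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"])  # assumed probe command
    print(out.decode().strip())  # e.g. "167 167"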
Dec  7 05:13:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:09.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:09 np0005549474 podman[271801]: 2025-12-07 10:13:09.746109638 +0000 UTC m=+0.060834238 container create 16cc10db4fabd0b4c4c42543f07093dc38950efdce633d463da4823193387dd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dubinsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 05:13:09 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:09 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:09 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:13:09 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:09 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:09 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:13:09 np0005549474 systemd[1]: Started libpod-conmon-16cc10db4fabd0b4c4c42543f07093dc38950efdce633d463da4823193387dd3.scope.
Dec  7 05:13:09 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:13:09 np0005549474 podman[271801]: 2025-12-07 10:13:09.726290018 +0000 UTC m=+0.041014598 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:13:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9746a4cd4663d5d20962c1e3aeb1a07cf6fe5fc109b2226d2e2c9a1da8f925c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9746a4cd4663d5d20962c1e3aeb1a07cf6fe5fc109b2226d2e2c9a1da8f925c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9746a4cd4663d5d20962c1e3aeb1a07cf6fe5fc109b2226d2e2c9a1da8f925c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9746a4cd4663d5d20962c1e3aeb1a07cf6fe5fc109b2226d2e2c9a1da8f925c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:09 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9746a4cd4663d5d20962c1e3aeb1a07cf6fe5fc109b2226d2e2c9a1da8f925c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:09 np0005549474 podman[271801]: 2025-12-07 10:13:09.838964408 +0000 UTC m=+0.153689428 container init 16cc10db4fabd0b4c4c42543f07093dc38950efdce633d463da4823193387dd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  7 05:13:09 np0005549474 podman[271801]: 2025-12-07 10:13:09.849254299 +0000 UTC m=+0.163978859 container start 16cc10db4fabd0b4c4c42543f07093dc38950efdce633d463da4823193387dd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:13:09 np0005549474 podman[271801]: 2025-12-07 10:13:09.852551238 +0000 UTC m=+0.167275878 container attach 16cc10db4fabd0b4c4c42543f07093dc38950efdce633d463da4823193387dd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dubinsky, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 05:13:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:09] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:13:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:09] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Dec  7 05:13:10 np0005549474 stupefied_dubinsky[271817]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:13:10 np0005549474 stupefied_dubinsky[271817]: --> All data devices are unavailable
Dec  7 05:13:10 np0005549474 systemd[1]: libpod-16cc10db4fabd0b4c4c42543f07093dc38950efdce633d463da4823193387dd3.scope: Deactivated successfully.
Dec  7 05:13:10 np0005549474 podman[271801]: 2025-12-07 10:13:10.215861198 +0000 UTC m=+0.530585758 container died 16cc10db4fabd0b4c4c42543f07093dc38950efdce633d463da4823193387dd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dubinsky, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:13:10 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b9746a4cd4663d5d20962c1e3aeb1a07cf6fe5fc109b2226d2e2c9a1da8f925c-merged.mount: Deactivated successfully.
Dec  7 05:13:10 np0005549474 podman[271801]: 2025-12-07 10:13:10.250858261 +0000 UTC m=+0.565582821 container remove 16cc10db4fabd0b4c4c42543f07093dc38950efdce633d463da4823193387dd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:13:10 np0005549474 systemd[1]: libpod-conmon-16cc10db4fabd0b4c4c42543f07093dc38950efdce633d463da4823193387dd3.scope: Deactivated successfully.
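stupefied_dubinsky's output above ("passed data devices: 0 physical, 1 LVM" / "All data devices are unavailable") reads like a ceph-volume batch report that found nothing to provision: the only candidate LV already carries OSD 0, as the lvm listing printed shortly below confirms. A related scan can be reproduced with ceph-volume's inventory subcommand; this is an adjacent command chosen for illustration, not necessarily the one cephadm ran here.

    import json
    import subprocess

    # "ceph-volume inventory --format json" reports, per device, whether
    # it is available for provisioning and why it was rejected if not.
    devices = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"]))
    for dev in devices:
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))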
Dec  7 05:13:10 np0005549474 nova_compute[256753]: 2025-12-07 10:13:10.458 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:10 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:10 np0005549474 podman[271938]: 2025-12-07 10:13:10.910066203 +0000 UTC m=+0.063183953 container create f5c14c5459234d972fdb9d0ad36900b71e65854307abf690f6e0749fc7c8be6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hellman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:13:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:10.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:10 np0005549474 systemd[1]: Started libpod-conmon-f5c14c5459234d972fdb9d0ad36900b71e65854307abf690f6e0749fc7c8be6e.scope.
Dec  7 05:13:10 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:13:10 np0005549474 podman[271938]: 2025-12-07 10:13:10.88649339 +0000 UTC m=+0.039611210 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:13:10 np0005549474 podman[271938]: 2025-12-07 10:13:10.991925703 +0000 UTC m=+0.145043473 container init f5c14c5459234d972fdb9d0ad36900b71e65854307abf690f6e0749fc7c8be6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hellman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Dec  7 05:13:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:10 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588002f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:11 np0005549474 podman[271938]: 2025-12-07 10:13:11.002983654 +0000 UTC m=+0.156101434 container start f5c14c5459234d972fdb9d0ad36900b71e65854307abf690f6e0749fc7c8be6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hellman, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  7 05:13:11 np0005549474 quizzical_hellman[271956]: 167 167
Dec  7 05:13:11 np0005549474 podman[271938]: 2025-12-07 10:13:11.007117887 +0000 UTC m=+0.160235747 container attach f5c14c5459234d972fdb9d0ad36900b71e65854307abf690f6e0749fc7c8be6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:13:11 np0005549474 systemd[1]: libpod-f5c14c5459234d972fdb9d0ad36900b71e65854307abf690f6e0749fc7c8be6e.scope: Deactivated successfully.
Dec  7 05:13:11 np0005549474 podman[271938]: 2025-12-07 10:13:11.009645075 +0000 UTC m=+0.162762855 container died f5c14c5459234d972fdb9d0ad36900b71e65854307abf690f6e0749fc7c8be6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hellman, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  7 05:13:11 np0005549474 systemd[1]: var-lib-containers-storage-overlay-c8229d2b09ee883250bbeb558d9dc629661bbcbf57fb9172e3d8b391678257b2-merged.mount: Deactivated successfully.
Dec  7 05:13:11 np0005549474 podman[271938]: 2025-12-07 10:13:11.061611662 +0000 UTC m=+0.214729442 container remove f5c14c5459234d972fdb9d0ad36900b71e65854307abf690f6e0749fc7c8be6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 05:13:11 np0005549474 systemd[1]: libpod-conmon-f5c14c5459234d972fdb9d0ad36900b71e65854307abf690f6e0749fc7c8be6e.scope: Deactivated successfully.
Dec  7 05:13:11 np0005549474 nova_compute[256753]: 2025-12-07 10:13:11.143 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 11 KiB/s wr, 38 op/s
Dec  7 05:13:11 np0005549474 podman[271978]: 2025-12-07 10:13:11.313726581 +0000 UTC m=+0.076348131 container create eda6f72f1118def226034f37235181d99bdd9da29a696badd1f164c4777dcd58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 05:13:11 np0005549474 systemd[1]: Started libpod-conmon-eda6f72f1118def226034f37235181d99bdd9da29a696badd1f164c4777dcd58.scope.
Dec  7 05:13:11 np0005549474 podman[271978]: 2025-12-07 10:13:11.285526503 +0000 UTC m=+0.048148103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:13:11 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:13:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46f9cc9be1967bdc4f2ef2359ec58d6787b584a7a44ee643a097f7990c27b7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46f9cc9be1967bdc4f2ef2359ec58d6787b584a7a44ee643a097f7990c27b7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46f9cc9be1967bdc4f2ef2359ec58d6787b584a7a44ee643a097f7990c27b7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:11 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46f9cc9be1967bdc4f2ef2359ec58d6787b584a7a44ee643a097f7990c27b7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:11 np0005549474 podman[271978]: 2025-12-07 10:13:11.41831905 +0000 UTC m=+0.180940580 container init eda6f72f1118def226034f37235181d99bdd9da29a696badd1f164c4777dcd58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 05:13:11 np0005549474 podman[271978]: 2025-12-07 10:13:11.431626163 +0000 UTC m=+0.194247673 container start eda6f72f1118def226034f37235181d99bdd9da29a696badd1f164c4777dcd58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:13:11 np0005549474 podman[271978]: 2025-12-07 10:13:11.434768129 +0000 UTC m=+0.197389639 container attach eda6f72f1118def226034f37235181d99bdd9da29a696badd1f164c4777dcd58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:13:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:11 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800ad10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]: {
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:    "0": [
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:        {
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            "devices": [
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "/dev/loop3"
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            ],
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            "lv_name": "ceph_lv0",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            "lv_size": "21470642176",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            "name": "ceph_lv0",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            "tags": {
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.cluster_name": "ceph",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.crush_device_class": "",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.encrypted": "0",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.osd_id": "0",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.type": "block",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.vdo": "0",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:                "ceph.with_tpm": "0"
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            },
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            "type": "block",
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:            "vg_name": "ceph_vg0"
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:        }
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]:    ]
Dec  7 05:13:11 np0005549474 elastic_taussig[271996]: }
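The JSON block printed by elastic_taussig is keyed by OSD id, one list of logical volumes per OSD, with the OSD's identity carried in the LV's ceph.* tags. Pulling the identity fields out, with a trimmed copy of the output inlined so the sketch runs standalone:

    import json

    raw = '''{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "tags": {"ceph.osd_id": "0",
                              "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
                              "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c"}}]}'''
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(osd_id, lv["tags"]["ceph.osd_fsid"], lv["lv_path"])
    # -> 0 32dc95f1-8dbf-4ad2-8ecd-93489439352c /dev/ceph_vg0/ceph_lv0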
Dec  7 05:13:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:11.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:11 np0005549474 systemd[1]: libpod-eda6f72f1118def226034f37235181d99bdd9da29a696badd1f164c4777dcd58.scope: Deactivated successfully.
Dec  7 05:13:11 np0005549474 podman[271978]: 2025-12-07 10:13:11.712466945 +0000 UTC m=+0.475088495 container died eda6f72f1118def226034f37235181d99bdd9da29a696badd1f164c4777dcd58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 05:13:11 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b46f9cc9be1967bdc4f2ef2359ec58d6787b584a7a44ee643a097f7990c27b7d-merged.mount: Deactivated successfully.
Dec  7 05:13:11 np0005549474 podman[271978]: 2025-12-07 10:13:11.757241895 +0000 UTC m=+0.519863415 container remove eda6f72f1118def226034f37235181d99bdd9da29a696badd1f164c4777dcd58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:13:11 np0005549474 systemd[1]: libpod-conmon-eda6f72f1118def226034f37235181d99bdd9da29a696badd1f164c4777dcd58.scope: Deactivated successfully.
Dec  7 05:13:12 np0005549474 podman[272107]: 2025-12-07 10:13:12.405598251 +0000 UTC m=+0.067639815 container create 640cf53dd4bdeceee94a58fa1522cb29c0570f16a70bd22e7607e8e47dae3198 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 05:13:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:13:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:13:12 np0005549474 systemd[1]: Started libpod-conmon-640cf53dd4bdeceee94a58fa1522cb29c0570f16a70bd22e7607e8e47dae3198.scope.
Dec  7 05:13:12 np0005549474 podman[272107]: 2025-12-07 10:13:12.375813319 +0000 UTC m=+0.037854933 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:13:12 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:13:12 np0005549474 podman[272107]: 2025-12-07 10:13:12.486127055 +0000 UTC m=+0.148168629 container init 640cf53dd4bdeceee94a58fa1522cb29c0570f16a70bd22e7607e8e47dae3198 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 05:13:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:13:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:13:12 np0005549474 podman[272107]: 2025-12-07 10:13:12.493615409 +0000 UTC m=+0.155656943 container start 640cf53dd4bdeceee94a58fa1522cb29c0570f16a70bd22e7607e8e47dae3198 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 05:13:12 np0005549474 podman[272107]: 2025-12-07 10:13:12.496774155 +0000 UTC m=+0.158815719 container attach 640cf53dd4bdeceee94a58fa1522cb29c0570f16a70bd22e7607e8e47dae3198 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 05:13:12 np0005549474 quizzical_gould[272123]: 167 167
Dec  7 05:13:12 np0005549474 systemd[1]: libpod-640cf53dd4bdeceee94a58fa1522cb29c0570f16a70bd22e7607e8e47dae3198.scope: Deactivated successfully.
Dec  7 05:13:12 np0005549474 podman[272107]: 2025-12-07 10:13:12.501088363 +0000 UTC m=+0.163129907 container died 640cf53dd4bdeceee94a58fa1522cb29c0570f16a70bd22e7607e8e47dae3198 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:13:12 np0005549474 systemd[1]: var-lib-containers-storage-overlay-574156a184b114a2ec91a13cd4049da2457dee875c73185e856ab44a72a61de5-merged.mount: Deactivated successfully.
Dec  7 05:13:12 np0005549474 podman[272107]: 2025-12-07 10:13:12.544153516 +0000 UTC m=+0.206195060 container remove 640cf53dd4bdeceee94a58fa1522cb29c0570f16a70bd22e7607e8e47dae3198 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gould, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:13:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:13:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:13:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:13:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:13:12 np0005549474 systemd[1]: libpod-conmon-640cf53dd4bdeceee94a58fa1522cb29c0570f16a70bd22e7607e8e47dae3198.scope: Deactivated successfully.
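[annotation] The six podman events above (create, init, start, attach, died, remove, all within roughly 160 ms) are the footprint of a disposable container that printed "167 167" and exited — consistent with cephadm probing the ceph uid/gid inside the image. A minimal sketch of the same round trip; the stat-based entrypoint is an assumption, since the journal records only the output, not the command:

    #!/usr/bin/env python3
    # Hypothetical reconstruction of the short-lived "podman run --rm" round trip
    # seen above. The entrypoint (stat of /var/lib/ceph) is an assumption; the
    # log only shows the container's output "167 167".
    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"

    def probe_uid_gid(image: str = IMAGE) -> tuple[int, int]:
        # --rm makes podman emit the same died/remove events recorded in the journal.
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        return int(out[0]), int(out[1])

    if __name__ == "__main__":
        print(probe_uid_gid())  # e.g. (167, 167) — the ceph uid/gid in RHEL-family images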
Dec  7 05:13:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:12 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:13:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
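[annotation] A quick arithmetic check on the _set_new_cache_sizes line above (values copied verbatim; reading the three allocations as sub-buckets of cache_size is an assumption about the autotuner's output format):

    # Values from the mon log line above.
    cache_size = 1020054731
    buckets = {"inc_alloc": 343932928, "full_alloc": 348127232, "kv_alloc": 318767104}
    print(sum(buckets.values()))               # 1010827264
    print(sum(buckets.values()) / cache_size)  # ~0.991 — allocations track the cache target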
Dec  7 05:13:12 np0005549474 podman[272145]: 2025-12-07 10:13:12.760881571 +0000 UTC m=+0.056762918 container create 752b96e4f1cdb4eb220d0243eebdcf63bad493d3d92e27c58d89f1e7508ceea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_euler, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 05:13:12 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:12 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:12 np0005549474 nova_compute[256753]: 2025-12-07 10:13:12.788 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:12 np0005549474 podman[272145]: 2025-12-07 10:13:12.735257243 +0000 UTC m=+0.031138600 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:13:12 np0005549474 systemd[1]: Started libpod-conmon-752b96e4f1cdb4eb220d0243eebdcf63bad493d3d92e27c58d89f1e7508ceea8.scope.
Dec  7 05:13:12 np0005549474 nova_compute[256753]: 2025-12-07 10:13:12.906 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:12 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:13:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1e62aba5021cfad6f9d4ba3da480e952a4c8a8f49dba04f018cf60ec98f934/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1e62aba5021cfad6f9d4ba3da480e952a4c8a8f49dba04f018cf60ec98f934/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1e62aba5021cfad6f9d4ba3da480e952a4c8a8f49dba04f018cf60ec98f934/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:12 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1e62aba5021cfad6f9d4ba3da480e952a4c8a8f49dba04f018cf60ec98f934/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:13:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:12.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
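[annotation] The beast access lines here and throughout this window follow a fixed rhythm: an anonymous "HEAD / HTTP/1.0" from 192.168.122.100 or .102 roughly every two seconds, always 200 — the signature of a load-balancer health probe rather than user traffic. A minimal probe of the same shape (the listening port is an assumption; the log does not show it):

    #!/usr/bin/env python3
    # Sketch of the anonymous "HEAD /" liveness probe producing the beast
    # access lines above. Port 8080 is an assumption.
    import http.client

    def rgw_alive(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except (OSError, http.client.HTTPException):
            return False
        finally:
            conn.close()

    print(rgw_alive("192.168.122.100"))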
Dec  7 05:13:12 np0005549474 podman[272145]: 2025-12-07 10:13:12.952157993 +0000 UTC m=+0.248039400 container init 752b96e4f1cdb4eb220d0243eebdcf63bad493d3d92e27c58d89f1e7508ceea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_euler, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:13:12 np0005549474 podman[272145]: 2025-12-07 10:13:12.962081393 +0000 UTC m=+0.257962710 container start 752b96e4f1cdb4eb220d0243eebdcf63bad493d3d92e27c58d89f1e7508ceea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:13:12 np0005549474 podman[272145]: 2025-12-07 10:13:12.964798447 +0000 UTC m=+0.260679884 container attach 752b96e4f1cdb4eb220d0243eebdcf63bad493d3d92e27c58d89f1e7508ceea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_euler, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec  7 05:13:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:12 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  7 05:13:13 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:13 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588002f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:13 np0005549474 lvm[272240]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:13:13 np0005549474 lvm[272240]: VG ceph_vg0 finished
Dec  7 05:13:13 np0005549474 nervous_euler[272162]: {}
Dec  7 05:13:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:13:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:13.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:13:13 np0005549474 lvm[272244]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:13:13 np0005549474 lvm[272244]: VG ceph_vg0 finished
Dec  7 05:13:13 np0005549474 systemd[1]: libpod-752b96e4f1cdb4eb220d0243eebdcf63bad493d3d92e27c58d89f1e7508ceea8.scope: Deactivated successfully.
Dec  7 05:13:13 np0005549474 systemd[1]: libpod-752b96e4f1cdb4eb220d0243eebdcf63bad493d3d92e27c58d89f1e7508ceea8.scope: Consumed 1.211s CPU time.
Dec  7 05:13:13 np0005549474 podman[272145]: 2025-12-07 10:13:13.713108236 +0000 UTC m=+1.008989583 container died 752b96e4f1cdb4eb220d0243eebdcf63bad493d3d92e27c58d89f1e7508ceea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_euler, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:13:13 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2d1e62aba5021cfad6f9d4ba3da480e952a4c8a8f49dba04f018cf60ec98f934-merged.mount: Deactivated successfully.
Dec  7 05:13:13 np0005549474 podman[272145]: 2025-12-07 10:13:13.778718344 +0000 UTC m=+1.074599661 container remove 752b96e4f1cdb4eb220d0243eebdcf63bad493d3d92e27c58d89f1e7508ceea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_euler, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 05:13:13 np0005549474 systemd[1]: libpod-conmon-752b96e4f1cdb4eb220d0243eebdcf63bad493d3d92e27c58d89f1e7508ceea8.scope: Deactivated successfully.
Dec  7 05:13:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:13:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:13:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:14 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:14 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:13:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:14 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800ad30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:14.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Dec  7 05:13:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:15 np0005549474 nova_compute[256753]: 2025-12-07 10:13:15.459 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:13:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:15 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
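[annotation] Taken together, the grace events above say: the server entered a 90-second grace window, reloaded client reclaim state from the backend, and found zero clients with state to reclaim. A sketch of the lift decision this implies — a paraphrase of the visible behavior, not ganesha's actual C implementation:

    # Grace lifts early once every client with reclaimable state has finished,
    # otherwise it runs out the 90 s window logged by nfs_start_grace.
    import time

    GRACE_SECONDS = 90

    def may_lift_grace(started_at: float, reclaim_complete: int, clid_count: int) -> bool:
        all_reclaimed = clid_count == 0 or reclaim_complete >= clid_count
        timed_out = time.monotonic() - started_at >= GRACE_SECONDS
        return all_reclaimed or timed_out

    print(may_lift_grace(time.monotonic(), reclaim_complete=0, clid_count=0))  # True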
Dec  7 05:13:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:15.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:16 np0005549474 nova_compute[256753]: 2025-12-07 10:13:16.191 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:16 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:16.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:17 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800ad50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec  7 05:13:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:17.158Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:13:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:17.158Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
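[annotation] Both dispatcher errors above are the ceph-dashboard webhook receivers on compute-1 and compute-2 being unreachable on port 8443. A minimal stand-in receiver for that URL path, useful when reproducing the failure; plain HTTP and the payload shape are assumptions (the real endpoint may be serving TLS on 8443):

    #!/usr/bin/env python3
    # Stand-in for the unreachable /api/prometheus_receiver endpoint. Alertmanager
    # webhooks POST a JSON document with an "alerts" list; any 2xx reply ends the
    # retry loop seen in the dispatcher errors above.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_error(404)
                return
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            alerts = json.loads(body or b"{}").get("alerts", [])
            print(f"received {len(alerts)} alert(s)")
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()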
Dec  7 05:13:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:17 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:17.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:13:17 np0005549474 nova_compute[256753]: 2025-12-07 10:13:17.778 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:13:18 np0005549474 nova_compute[256753]: 2025-12-07 10:13:18.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:13:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:18 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:18.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:19 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec  7 05:13:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:19 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:13:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:19.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:13:19 np0005549474 nova_compute[256753]: 2025-12-07 10:13:19.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:13:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:19] "GET /metrics HTTP/1.1" 200 48369 "" "Prometheus/2.51.0"
Dec  7 05:13:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:19] "GET /metrics HTTP/1.1" 200 48369 "" "Prometheus/2.51.0"
Dec  7 05:13:20 np0005549474 nova_compute[256753]: 2025-12-07 10:13:20.461 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:20 np0005549474 nova_compute[256753]: 2025-12-07 10:13:20.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:13:20 np0005549474 nova_compute[256753]: 2025-12-07 10:13:20.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:13:20 np0005549474 nova_compute[256753]: 2025-12-07 10:13:20.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:13:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:20 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:20 np0005549474 nova_compute[256753]: 2025-12-07 10:13:20.785 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:13:20 np0005549474 nova_compute[256753]: 2025-12-07 10:13:20.786 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:13:20 np0005549474 nova_compute[256753]: 2025-12-07 10:13:20.786 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:13:20 np0005549474 nova_compute[256753]: 2025-12-07 10:13:20.786 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:13:20 np0005549474 nova_compute[256753]: 2025-12-07 10:13:20.787 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:13:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:20.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:21 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.119 256757 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765102386.1185625, daa0d61c-ce51-4a65-82e0-106c2654ed92 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.119 256757 INFO nova.compute.manager [-] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] VM Stopped (Lifecycle Event)
Dec  7 05:13:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 1.1 KiB/s wr, 8 op/s
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.171 256757 DEBUG nova.compute.manager [None req-bf996fde-e3b9-4b02-8b96-6f0894395114 - - - - - -] [instance: daa0d61c-ce51-4a65-82e0-106c2654ed92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.192 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:13:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1503149317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.333 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
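[annotation] The subprocess above is how nova's libvirt driver samples Ceph capacity on this node. The same command, wrapped in a small parser; the "stats"/"total_avail_bytes" keys follow the usual `ceph df -f json` schema and should be treated as an assumption for other Ceph releases:

    #!/usr/bin/env python3
    # Re-run the exact command nova logs above and extract available capacity.
    import json, subprocess

    CMD = ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    def cluster_avail_gib() -> float:
        out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
        return json.loads(out)["stats"]["total_avail_bytes"] / 1024**3

    if __name__ == "__main__":
        print(f"{cluster_avail_gib():.1f} GiB available")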
Dec  7 05:13:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:21 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.540 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.542 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4546MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.542 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.542 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.603 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.603 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:13:21 np0005549474 nova_compute[256753]: 2025-12-07 10:13:21.620 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:13:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:21.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:13:22 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576818489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:13:22 np0005549474 nova_compute[256753]: 2025-12-07 10:13:22.076 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:13:22 np0005549474 nova_compute[256753]: 2025-12-07 10:13:22.084 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:13:22 np0005549474 nova_compute[256753]: 2025-12-07 10:13:22.099 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:13:22 np0005549474 nova_compute[256753]: 2025-12-07 10:13:22.117 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:13:22 np0005549474 nova_compute[256753]: 2025-12-07 10:13:22.118 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
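[annotation] The inventory dict logged above pins down the schedulable capacity on this node. A worked check using placement's standard capacity rule, capacity = (total - reserved) * allocation_ratio (general placement behavior, stated from knowledge of the API rather than from this log):

    # Values copied from the set_inventory_for_provider line above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2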
Dec  7 05:13:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:13:22 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:22 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800adb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:22.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:23 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Dec  7 05:13:23 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:23 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:23.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:24 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:24.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:25 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800add0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:25 np0005549474 nova_compute[256753]: 2025-12-07 10:13:25.118 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:13:25 np0005549474 nova_compute[256753]: 2025-12-07 10:13:25.118 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:13:25 np0005549474 nova_compute[256753]: 2025-12-07 10:13:25.119 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:13:25 np0005549474 nova_compute[256753]: 2025-12-07 10:13:25.119 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:13:25 np0005549474 nova_compute[256753]: 2025-12-07 10:13:25.139 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:13:25 np0005549474 nova_compute[256753]: 2025-12-07 10:13:25.140 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:13:25 np0005549474 nova_compute[256753]: 2025-12-07 10:13:25.141 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:13:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 341 B/s wr, 173 op/s
Dec  7 05:13:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:25 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:25 np0005549474 nova_compute[256753]: 2025-12-07 10:13:25.464 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:25 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:13:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:25.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:26 np0005549474 nova_compute[256753]: 2025-12-07 10:13:26.230 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:26 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:26.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:27 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Dec  7 05:13:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:27.159Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:13:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:13:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:13:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:27 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:27.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:13:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:28 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:28.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:29 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Dec  7 05:13:29 np0005549474 podman[272366]: 2025-12-07 10:13:29.248888577 +0000 UTC m=+0.058824054 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec  7 05:13:29 np0005549474 podman[272367]: 2025-12-07 10:13:29.303154965 +0000 UTC m=+0.110981754 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
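[annotation] The two health_status=healthy events above come from podman's periodic healthcheck timers running the configured '/openstack/healthcheck' test inside each container. The same check can be driven on demand with the real `podman healthcheck run` subcommand; mapping exit status 0 to healthy is the standard convention:

    #!/usr/bin/env python3
    # Run the configured healthcheck for the containers seen above, on demand.
    import subprocess

    def healthy(container: str) -> bool:
        # Exit code 0 means the container's healthcheck test passed.
        return subprocess.run(["podman", "healthcheck", "run", container]).returncode == 0

    for name in ("multipathd", "ovn_controller"):
        print(name, "healthy" if healthy(name) else "unhealthy")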
Dec  7 05:13:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:29 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:29.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:29] "GET /metrics HTTP/1.1" 200 48374 "" "Prometheus/2.51.0"
Dec  7 05:13:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:29] "GET /metrics HTTP/1.1" 200 48374 "" "Prometheus/2.51.0"
Dec  7 05:13:30 np0005549474 nova_compute[256753]: 2025-12-07 10:13:30.467 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:30 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800ae30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:30.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:31 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v939: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 173 op/s
Dec  7 05:13:31 np0005549474 nova_compute[256753]: 2025-12-07 10:13:31.233 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:31 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:31.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:13:32 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:32 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:32.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
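Annotation: the beast lines repeating above are radosgw's access log (client, user, timestamp, request, status, bytes, latency). The two clients, 192.168.122.100 and 192.168.122.102, each send "HEAD / HTTP/1.0" roughly every two seconds — load-balancer health probes, not user traffic. A minimal Python sketch for tallying them; the regex is inferred from the samples in this log, not from radosgw's documented access-log format:

    import re
    from collections import defaultdict

    # Pattern inferred from the beast lines above; an assumption, not
    # radosgw's documented access-log format.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+)'
    )

    def tally(lines):
        """Count requests per (client, request) pair to separate the
        periodic health probes from any real traffic."""
        counts = defaultdict(int)
        for line in lines:
            m = BEAST.search(line)
            if m:
                counts[(m.group('client'), m.group('req'))] += 1
        return dict(counts)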
Dec  7 05:13:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:33 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800ae50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v940: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Dec  7 05:13:33 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:33 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:33.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:34 np0005549474 podman[272417]: 2025-12-07 10:13:34.294189514 +0000 UTC m=+0.104626861 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  7 05:13:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:34 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:34.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:35 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v941: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 1.8 MiB/s wr, 200 op/s
Dec  7 05:13:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:35 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f159800ae70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:35 np0005549474 nova_compute[256753]: 2025-12-07 10:13:35.468 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:35 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:13:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:35.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:36 np0005549474 nova_compute[256753]: 2025-12-07 10:13:36.236 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:36 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:36.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:37 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:37.160Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:13:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:37.160Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:13:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:37.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
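Annotation: both dashboard webhook receivers time out at the TCP dial, so alertmanager gives up after its two retries. A quick reachability probe as a sketch — the URLs are copied verbatim from the errors above, but the empty JSON body and five-second timeout are arbitrary test choices, not what alertmanager actually sends:

    import urllib.error
    import urllib.request

    # Receiver URLs copied verbatim from the alertmanager errors above.
    URLS = [
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
    ]

    for url in URLS:
        try:
            # An empty JSON POST with a short timeout distinguishes
            # "port unreachable" (timeout, as in the log) from "service
            # up but rejecting the payload" (an HTTP error).
            req = urllib.request.Request(url, data=b"{}", method="POST")
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, "->", resp.status)
        except urllib.error.HTTPError as exc:
            print(url, "-> HTTP", exc.code)  # reachable, app-level rejection
        except OSError as exc:
            print(url, "->", exc)            # timeout/refused, as logged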
Dec  7 05:13:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v942: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:13:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:37 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f158c0015b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:37.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:13:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:38.624 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:13:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:38.625 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:13:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:13:38.625 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:13:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:38 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:38.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:39 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v943: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Dec  7 05:13:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:39 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:39.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:39] "GET /metrics HTTP/1.1" 200 48374 "" "Prometheus/2.51.0"
Dec  7 05:13:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:39] "GET /metrics HTTP/1.1" 200 48374 "" "Prometheus/2.51.0"
Dec  7 05:13:40 np0005549474 nova_compute[256753]: 2025-12-07 10:13:40.470 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:40 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:40.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:41 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v944: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  7 05:13:41 np0005549474 nova_compute[256753]: 2025-12-07 10:13:41.239 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:41 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:41.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:13:42
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.control', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta']
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:13:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:13:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:13:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:13:42 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:42 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:13:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
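Annotation: in every pg_autoscaler line above, the raw pg target equals usage_ratio x bias x 300; the factor 300 is consistent with a 3-OSD cluster at the default mon_target_pg_per_osd of 100, though that is an inference from the numbers, not a value read from the cluster. The "quantized" figure is the raw target rounded up to a power of two and clamped to a per-pool floor (1 for .mgr, 16 for cephfs.cephfs.meta, 32 elsewhere — these look like per-pool pg_num_min settings, also an assumption). A sketch of that arithmetic, checked against the values in the log:

    import math

    # 300 = 3 OSDs x mon_target_pg_per_osd(100); inferred from this log.
    PG_BUDGET = 300

    def quantized_target(usage_ratio, bias, pool_floor):
        raw = usage_ratio * bias * PG_BUDGET
        # Round up to a power of two, then clamp to the pool's floor.
        # Floors (1 for .mgr, 16 for cephfs.cephfs.meta, 32 elsewhere)
        # are assumed to be pg_num_min values.
        pow2 = 2 ** max(0, math.ceil(math.log2(raw))) if raw > 0 else 1
        return max(pool_floor, pow2)

    # Pool 'images': 0.000665858301588852 * 1.0 * 300 = 0.19975749...,
    # matching "pg target 0.19975749047665559 quantized to 32" above.
    assert quantized_target(0.000665858301588852, 1.0, 32) == 32
    # Pool '.mgr': 7.185749983720779e-06 * 1.0 * 300 = 0.0021557...
    assert quantized_target(7.185749983720779e-06, 1.0, 1) == 1
    # Pool 'cephfs.cephfs.meta' carries bias 4.0 and a floor of 16.
    assert quantized_target(5.087256625643029e-07, 4.0, 16) == 16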
Dec  7 05:13:42 np0005549474 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  7 05:13:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:42.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:43 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1568001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v945: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  7 05:13:43 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:43 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:43.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:44.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v946: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Dec  7 05:13:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:45 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:45 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 50 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:45 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:45 np0005549474 nova_compute[256753]: 2025-12-07 10:13:45.471 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:45 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:13:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:45.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:46 np0005549474 nova_compute[256753]: 2025-12-07 10:13:46.241 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:46 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580000e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:46.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:47 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:47.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:13:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v947: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec  7 05:13:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:47 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:13:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:47.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:48 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:48.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:49 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580000e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec  7 05:13:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:49 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:49.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:49] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Dec  7 05:13:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:49] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Dec  7 05:13:50 np0005549474 ovn_controller[154296]: 2025-12-07T10:13:50Z|00068|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  7 05:13:50 np0005549474 nova_compute[256753]: 2025-12-07 10:13:50.493 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:50 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:50.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:51 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 13 KiB/s wr, 33 op/s
Dec  7 05:13:51 np0005549474 nova_compute[256753]: 2025-12-07 10:13:51.243 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:51 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1580000e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:13:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:52 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:52.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:53 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec  7 05:13:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:53 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:53.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:54 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800026f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:54.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:55 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  7 05:13:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:55 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:55 np0005549474 nova_compute[256753]: 2025-12-07 10:13:55.495 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:55 : epoch 69355203 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:13:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:55.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:56 np0005549474 nova_compute[256753]: 2025-12-07 10:13:56.280 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:13:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:56 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:56.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:57 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800026f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:57.162Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:13:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:57.162Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:13:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:13:57.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:13:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec  7 05:13:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:13:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:13:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:57 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:13:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:13:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:13:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:58 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:58.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:59 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v953: 337 pgs: 337 active+clean; 71 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Dec  7 05:13:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:13:59 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800026f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:13:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:13:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:13:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:59.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:13:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:59] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Dec  7 05:13:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:13:59] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Dec  7 05:14:00 np0005549474 podman[272493]: 2025-12-07 10:14:00.27803164 +0000 UTC m=+0.086582639 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  7 05:14:00 np0005549474 podman[272494]: 2025-12-07 10:14:00.384121751 +0000 UTC m=+0.186768999 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  7 05:14:00 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:00.421 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  7 05:14:00 np0005549474 nova_compute[256753]: 2025-12-07 10:14:00.422 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:00 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:00.422 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  7 05:14:00 np0005549474 nova_compute[256753]: 2025-12-07 10:14:00.497 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:14:00 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1570004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:14:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:00.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:14:01 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15880025e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:14:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v954: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:14:01 np0005549474 nova_compute[256753]: 2025-12-07 10:14:01.282 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:14:01 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f156400c050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:14:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:01.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:14:02 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:14:02 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800026f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:14:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:03.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:14:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f15800026f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 05:14:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v955: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:14:03 np0005549474 kernel: ganesha.nfsd[271461]: segfault at 50 ip 00007f16484cd32e sp 00007f16167fb210 error 4 in libntirpc.so.5.8[7f16484b2000+2c000] likely on CPU 3 (core 0, socket 3)
Dec  7 05:14:03 np0005549474 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  7 05:14:03 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[264117]: 07/12/2025 10:14:03 : epoch 69355203 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1588002600 fd 48 proxy ignored for local
Dec  7 05:14:03 np0005549474 systemd[1]: Started Process Core Dump (PID 272544/UID 0).
Dec  7 05:14:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:03.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:04 np0005549474 podman[272570]: 2025-12-07 10:14:04.63594183 +0000 UTC m=+0.076871905 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 05:14:04 np0005549474 systemd-coredump[272545]: Process 264121 (ganesha.nfsd) of user 0 dumped core.
Dec  7 05:14:04 np0005549474 systemd-coredump[272545]: Stack trace of thread 75:
Dec  7 05:14:04 np0005549474 systemd-coredump[272545]: #0  0x00007f16484cd32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Dec  7 05:14:04 np0005549474 systemd-coredump[272545]: ELF object binary architecture: AMD x86-64
Dec  7 05:14:04 np0005549474 systemd[1]: systemd-coredump@10-272544-0.service: Deactivated successfully.
Dec  7 05:14:04 np0005549474 systemd[1]: systemd-coredump@10-272544-0.service: Consumed 1.223s CPU time.
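systemd-coredump has captured the ganesha.nfsd crash (PID 264121), so the core and its metadata remain retrievable from the journal. Illustrative commands, assuming coredumpctl plus gdb with matching debuginfo are available on the host:

    coredumpctl list ganesha.nfsd   # recent dumps for this binary
    coredumpctl info 264121         # metadata plus the captured stack trace
    coredumpctl debug 264121        # open the core under gdb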
Dec  7 05:14:04 np0005549474 podman[272596]: 2025-12-07 10:14:04.962315713 +0000 UTC m=+0.027079719 container died 86b15150039c9d7eeb4706ed22070546c97ca48fbdf6de36b8fff2ce0af601ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 05:14:04 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5d2af02a37708124f2e96d463c298575434318746b8e8668762d9879d7717b61-merged.mount: Deactivated successfully.
Dec  7 05:14:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:05.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:05 np0005549474 podman[272596]: 2025-12-07 10:14:05.010870686 +0000 UTC m=+0.075634612 container remove 86b15150039c9d7eeb4706ed22070546c97ca48fbdf6de36b8fff2ce0af601ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  7 05:14:05 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Main process exited, code=exited, status=139/n/a
Dec  7 05:14:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v956: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:14:05 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Failed with result 'exit-code'.
Dec  7 05:14:05 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 2.369s CPU time.
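Exit status 139 is 128 + 11, i.e. the container's main process died on SIGSEGV, consistent with the ganesha.nfsd segfault and core dump above (podman propagates the container's status to the unit). A sketch of confirming this from the unit's own bookkeeping:

    systemctl show \
        ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service \
        -p ExecMainCode -p ExecMainStatus   # expect exited / 139 here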
Dec  7 05:14:05 np0005549474 nova_compute[256753]: 2025-12-07 10:14:05.535 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:05.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:06 np0005549474 nova_compute[256753]: 2025-12-07 10:14:06.284 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:07.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:14:07.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
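Alertmanager cannot deliver the ceph-dashboard webhook to either peer before the notification deadline expires. A quick reachability probe of the exact receiver URLs named in the error, as a sketch (the 5-second timeout is an arbitrary choice):

    curl -sv --max-time 5 http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver
    curl -sv --max-time 5 http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver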
Dec  7 05:14:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v957: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  7 05:14:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
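The recurring _set_new_cache_sizes lines are the monitor's memory autotuner re-splitting its incremental/full-map/RocksDB caches against the configured memory budget. The budget can be read back as a sketch:

    ceph config get mon mon_memory_target   # autotune budget, in bytes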
Dec  7 05:14:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:07.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:09.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v958: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Dec  7 05:14:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [WARNING] 340/101409 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 05:14:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq[98151]: [ALERT] 340/101409 (4) : backend 'backend' has no server available!
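With ganesha down, haproxy's layer-4 health check gets "Connection refused" and the only server in the nfs backend is marked DOWN, leaving the backend empty. The runtime state can be queried over haproxy's admin socket; a sketch in which the socket path inside the haproxy container is an assumption:

    # "show servers state" is part of the HAProxy runtime API.
    echo "show servers state backend" | socat stdio /var/lib/haproxy/stats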
Dec  7 05:14:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:09.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:09] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Dec  7 05:14:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:09] "GET /metrics HTTP/1.1" 200 48380 "" "Prometheus/2.51.0"
Dec  7 05:14:10 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:10.424 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  7 05:14:10 np0005549474 nova_compute[256753]: 2025-12-07 10:14:10.538 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:11.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v959: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 560 KiB/s wr, 77 op/s
Dec  7 05:14:11 np0005549474 nova_compute[256753]: 2025-12-07 10:14:11.286 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:11.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:14:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:14:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:14:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:14:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:14:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:14:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:14:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:14:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:14:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:13.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v960: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  7 05:14:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:13.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:15.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:14:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec  7 05:14:15 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Scheduled restart job, restart counter is at 11.
Dec  7 05:14:15 np0005549474 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 05:14:15 np0005549474 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service: Consumed 2.369s CPU time.
Dec  7 05:14:15 np0005549474 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
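This is the unit's 11th automatic restart, i.e. a crash loop rather than a one-off failure; cephadm-generated units are generally configured to keep restarting. The loop is visible from the unit's restart bookkeeping, as a sketch:

    systemctl show \
        ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.2.0.compute-0.bjrqrk.service \
        -p NRestarts -p Restart -p RestartUSec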
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:14:15 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:14:15 np0005549474 nova_compute[256753]: 2025-12-07 10:14:15.571 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:15 np0005549474 podman[272839]: 2025-12-07 10:14:15.667083672 +0000 UTC m=+0.083339052 container create a4fe25d85321292dd92d6ed4b00781032c412e604e47993a7bacd8717161d6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 05:14:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865804c5c30ab8f54783f4ca5242abbe5c64d47444fd76b642635403b30ceb7b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865804c5c30ab8f54783f4ca5242abbe5c64d47444fd76b642635403b30ceb7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865804c5c30ab8f54783f4ca5242abbe5c64d47444fd76b642635403b30ceb7b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:15 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865804c5c30ab8f54783f4ca5242abbe5c64d47444fd76b642635403b30ceb7b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.bjrqrk-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:15 np0005549474 podman[272839]: 2025-12-07 10:14:15.729525913 +0000 UTC m=+0.145781303 container init a4fe25d85321292dd92d6ed4b00781032c412e604e47993a7bacd8717161d6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:14:15 np0005549474 podman[272839]: 2025-12-07 10:14:15.73453148 +0000 UTC m=+0.150786850 container start a4fe25d85321292dd92d6ed4b00781032c412e604e47993a7bacd8717161d6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:14:15 np0005549474 podman[272839]: 2025-12-07 10:14:15.643742365 +0000 UTC m=+0.059997775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:14:15 np0005549474 bash[272839]: a4fe25d85321292dd92d6ed4b00781032c412e604e47993a7bacd8717161d6f9
Dec  7 05:14:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 05:14:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 05:14:15 np0005549474 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.bjrqrk for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec  7 05:14:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 05:14:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 05:14:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 05:14:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 05:14:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 05:14:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:15.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:15 np0005549474 podman[272886]: 2025-12-07 10:14:15.809489872 +0000 UTC m=+0.036711511 container create 4cf59d875c434b08f52a65cc53bbaf9a1fd4c634a9e2eec4d0fc405a54981b76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_wozniak, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:14:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
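The restarted ganesha enters a 90-second grace window so NFSv4 clients can reclaim state before new opens and locks are granted. With the Ceph RADOS recovery backend, the shared grace database can be dumped with ganesha-rados-grace; pool ".nfs" and namespace "cephfs" below are cephadm's defaults for an NFS cluster named "cephfs" and are assumptions:

    ganesha-rados-grace --pool .nfs --ns cephfs dump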
Dec  7 05:14:15 np0005549474 systemd[1]: Started libpod-conmon-4cf59d875c434b08f52a65cc53bbaf9a1fd4c634a9e2eec4d0fc405a54981b76.scope.
Dec  7 05:14:15 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:14:15 np0005549474 podman[272886]: 2025-12-07 10:14:15.887167358 +0000 UTC m=+0.114389007 container init 4cf59d875c434b08f52a65cc53bbaf9a1fd4c634a9e2eec4d0fc405a54981b76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_wozniak, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 05:14:15 np0005549474 podman[272886]: 2025-12-07 10:14:15.793416904 +0000 UTC m=+0.020638553 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:14:15 np0005549474 podman[272886]: 2025-12-07 10:14:15.893226424 +0000 UTC m=+0.120448053 container start 4cf59d875c434b08f52a65cc53bbaf9a1fd4c634a9e2eec4d0fc405a54981b76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_wozniak, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 05:14:15 np0005549474 podman[272886]: 2025-12-07 10:14:15.89639217 +0000 UTC m=+0.123613799 container attach 4cf59d875c434b08f52a65cc53bbaf9a1fd4c634a9e2eec4d0fc405a54981b76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:14:15 np0005549474 pensive_wozniak[272939]: 167 167
Dec  7 05:14:15 np0005549474 systemd[1]: libpod-4cf59d875c434b08f52a65cc53bbaf9a1fd4c634a9e2eec4d0fc405a54981b76.scope: Deactivated successfully.
Dec  7 05:14:15 np0005549474 podman[272886]: 2025-12-07 10:14:15.898373693 +0000 UTC m=+0.125595322 container died 4cf59d875c434b08f52a65cc53bbaf9a1fd4c634a9e2eec4d0fc405a54981b76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 05:14:15 np0005549474 systemd[1]: var-lib-containers-storage-overlay-cc4131c54384394dbb9fc29724a0c41fcba0e29a35f4e75814455ef96604e6e8-merged.mount: Deactivated successfully.
Dec  7 05:14:15 np0005549474 podman[272886]: 2025-12-07 10:14:15.935551497 +0000 UTC m=+0.162773126 container remove 4cf59d875c434b08f52a65cc53bbaf9a1fd4c634a9e2eec4d0fc405a54981b76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_wozniak, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:14:15 np0005549474 systemd[1]: libpod-conmon-4cf59d875c434b08f52a65cc53bbaf9a1fd4c634a9e2eec4d0fc405a54981b76.scope: Deactivated successfully.
Dec  7 05:14:16 np0005549474 podman[272962]: 2025-12-07 10:14:16.077598687 +0000 UTC m=+0.037544824 container create e95876183a416f07451c17d3ee6b40c7feed80f2f423cbd9576fa9cef94de2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 05:14:16 np0005549474 systemd[1]: Started libpod-conmon-e95876183a416f07451c17d3ee6b40c7feed80f2f423cbd9576fa9cef94de2e7.scope.
Dec  7 05:14:16 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:14:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731694c4491da1ba522f9568ae2b90a5caebbb55229be62b31fd65905794c8f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731694c4491da1ba522f9568ae2b90a5caebbb55229be62b31fd65905794c8f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731694c4491da1ba522f9568ae2b90a5caebbb55229be62b31fd65905794c8f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731694c4491da1ba522f9568ae2b90a5caebbb55229be62b31fd65905794c8f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731694c4491da1ba522f9568ae2b90a5caebbb55229be62b31fd65905794c8f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:16 np0005549474 podman[272962]: 2025-12-07 10:14:16.061738025 +0000 UTC m=+0.021684182 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:14:16 np0005549474 podman[272962]: 2025-12-07 10:14:16.157957276 +0000 UTC m=+0.117903433 container init e95876183a416f07451c17d3ee6b40c7feed80f2f423cbd9576fa9cef94de2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:14:16 np0005549474 podman[272962]: 2025-12-07 10:14:16.163761675 +0000 UTC m=+0.123707812 container start e95876183a416f07451c17d3ee6b40c7feed80f2f423cbd9576fa9cef94de2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:14:16 np0005549474 podman[272962]: 2025-12-07 10:14:16.166802547 +0000 UTC m=+0.126748684 container attach e95876183a416f07451c17d3ee6b40c7feed80f2f423cbd9576fa9cef94de2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:14:16 np0005549474 nova_compute[256753]: 2025-12-07 10:14:16.288 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:16 np0005549474 jovial_swirles[272979]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:14:16 np0005549474 jovial_swirles[272979]: --> All data devices are unavailable
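The short-lived helper containers here (pensive_wozniak, jovial_swirles, and the rest) are cephadm driving ceph-volume scans on this host; "All data devices are unavailable" only means no new, unprovisioned disks were found. The same inventory can be requested by hand, as a sketch:

    ceph orch device ls --refresh      # inventory as the orchestrator sees it
    cephadm ceph-volume -- inventory   # run the scan directly on this host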
Dec  7 05:14:16 np0005549474 systemd[1]: libpod-e95876183a416f07451c17d3ee6b40c7feed80f2f423cbd9576fa9cef94de2e7.scope: Deactivated successfully.
Dec  7 05:14:16 np0005549474 podman[272962]: 2025-12-07 10:14:16.544792796 +0000 UTC m=+0.504738993 container died e95876183a416f07451c17d3ee6b40c7feed80f2f423cbd9576fa9cef94de2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:14:16 np0005549474 systemd[1]: var-lib-containers-storage-overlay-731694c4491da1ba522f9568ae2b90a5caebbb55229be62b31fd65905794c8f2-merged.mount: Deactivated successfully.
Dec  7 05:14:16 np0005549474 podman[272962]: 2025-12-07 10:14:16.585181787 +0000 UTC m=+0.545127924 container remove e95876183a416f07451c17d3ee6b40c7feed80f2f423cbd9576fa9cef94de2e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_swirles, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:14:16 np0005549474 systemd[1]: libpod-conmon-e95876183a416f07451c17d3ee6b40c7feed80f2f423cbd9576fa9cef94de2e7.scope: Deactivated successfully.
Dec  7 05:14:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:17.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:14:17.164Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:14:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 KiB/s wr, 100 op/s
Dec  7 05:14:17 np0005549474 podman[273098]: 2025-12-07 10:14:17.183497669 +0000 UTC m=+0.060058257 container create a8c472a8d6a3e42bf01844d13160c9a5c2075cc571a4c1c7031f504c74d7ebda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:14:17 np0005549474 systemd[1]: Started libpod-conmon-a8c472a8d6a3e42bf01844d13160c9a5c2075cc571a4c1c7031f504c74d7ebda.scope.
Dec  7 05:14:17 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:14:17 np0005549474 podman[273098]: 2025-12-07 10:14:17.162189378 +0000 UTC m=+0.038749956 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:14:17 np0005549474 podman[273098]: 2025-12-07 10:14:17.268712382 +0000 UTC m=+0.145272950 container init a8c472a8d6a3e42bf01844d13160c9a5c2075cc571a4c1c7031f504c74d7ebda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 05:14:17 np0005549474 podman[273098]: 2025-12-07 10:14:17.279150015 +0000 UTC m=+0.155710603 container start a8c472a8d6a3e42bf01844d13160c9a5c2075cc571a4c1c7031f504c74d7ebda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 05:14:17 np0005549474 podman[273098]: 2025-12-07 10:14:17.283967337 +0000 UTC m=+0.160527915 container attach a8c472a8d6a3e42bf01844d13160c9a5c2075cc571a4c1c7031f504c74d7ebda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 05:14:17 np0005549474 mystifying_joliot[273115]: 167 167
Dec  7 05:14:17 np0005549474 systemd[1]: libpod-a8c472a8d6a3e42bf01844d13160c9a5c2075cc571a4c1c7031f504c74d7ebda.scope: Deactivated successfully.
Dec  7 05:14:17 np0005549474 podman[273098]: 2025-12-07 10:14:17.286968648 +0000 UTC m=+0.163529246 container died a8c472a8d6a3e42bf01844d13160c9a5c2075cc571a4c1c7031f504c74d7ebda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Dec  7 05:14:17 np0005549474 systemd[1]: var-lib-containers-storage-overlay-18ec006aef6ed28a7901990a1b624f5a57c1ab7e53644ca5763649e6b3369933-merged.mount: Deactivated successfully.
Dec  7 05:14:17 np0005549474 podman[273098]: 2025-12-07 10:14:17.332629872 +0000 UTC m=+0.209190470 container remove a8c472a8d6a3e42bf01844d13160c9a5c2075cc571a4c1c7031f504c74d7ebda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 05:14:17 np0005549474 systemd[1]: libpod-conmon-a8c472a8d6a3e42bf01844d13160c9a5c2075cc571a4c1c7031f504c74d7ebda.scope: Deactivated successfully.
Dec  7 05:14:17 np0005549474 podman[273139]: 2025-12-07 10:14:17.538984375 +0000 UTC m=+0.059662166 container create 6f39f184e73be3fe593cfb45fffc8b91d62030dd7d24ce9769228703b67627cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hawking, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:14:17 np0005549474 systemd[1]: Started libpod-conmon-6f39f184e73be3fe593cfb45fffc8b91d62030dd7d24ce9769228703b67627cc.scope.
Dec  7 05:14:17 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:14:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed4d04005980c0b8000c05a00209aa9078f77f4e09826ad3d166fd1ac074ce1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:17 np0005549474 podman[273139]: 2025-12-07 10:14:17.515757922 +0000 UTC m=+0.036435763 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:14:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed4d04005980c0b8000c05a00209aa9078f77f4e09826ad3d166fd1ac074ce1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed4d04005980c0b8000c05a00209aa9078f77f4e09826ad3d166fd1ac074ce1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed4d04005980c0b8000c05a00209aa9078f77f4e09826ad3d166fd1ac074ce1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:17 np0005549474 podman[273139]: 2025-12-07 10:14:17.624688321 +0000 UTC m=+0.145366162 container init 6f39f184e73be3fe593cfb45fffc8b91d62030dd7d24ce9769228703b67627cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hawking, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 05:14:17 np0005549474 podman[273139]: 2025-12-07 10:14:17.636179854 +0000 UTC m=+0.156857605 container start 6f39f184e73be3fe593cfb45fffc8b91d62030dd7d24ce9769228703b67627cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:14:17 np0005549474 podman[273139]: 2025-12-07 10:14:17.639374171 +0000 UTC m=+0.160051962 container attach 6f39f184e73be3fe593cfb45fffc8b91d62030dd7d24ce9769228703b67627cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 05:14:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:14:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:17.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:17 np0005549474 happy_hawking[273155]: {
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:    "0": [
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:        {
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            "devices": [
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "/dev/loop3"
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            ],
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            "lv_name": "ceph_lv0",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            "lv_size": "21470642176",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            "name": "ceph_lv0",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            "tags": {
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.cluster_name": "ceph",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.crush_device_class": "",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.encrypted": "0",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.osd_id": "0",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.type": "block",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.vdo": "0",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:                "ceph.with_tpm": "0"
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            },
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            "type": "block",
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:            "vg_name": "ceph_vg0"
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:        }
Dec  7 05:14:17 np0005549474 happy_hawking[273155]:    ]
Dec  7 05:14:17 np0005549474 happy_hawking[273155]: }
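The JSON emitted by happy_hawking is ceph-volume's LVM report for OSD 0: block device /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3, tagged with the cluster fsid and osd_fsid. The same report can be reproduced on demand, as a sketch:

    cephadm ceph-volume -- lvm list --format json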
Dec  7 05:14:17 np0005549474 systemd[1]: libpod-6f39f184e73be3fe593cfb45fffc8b91d62030dd7d24ce9769228703b67627cc.scope: Deactivated successfully.
Dec  7 05:14:17 np0005549474 podman[273139]: 2025-12-07 10:14:17.92011485 +0000 UTC m=+0.440792641 container died 6f39f184e73be3fe593cfb45fffc8b91d62030dd7d24ce9769228703b67627cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 05:14:17 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ed4d04005980c0b8000c05a00209aa9078f77f4e09826ad3d166fd1ac074ce1c-merged.mount: Deactivated successfully.
Dec  7 05:14:17 np0005549474 podman[273139]: 2025-12-07 10:14:17.962032532 +0000 UTC m=+0.482710283 container remove 6f39f184e73be3fe593cfb45fffc8b91d62030dd7d24ce9769228703b67627cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hawking, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 05:14:17 np0005549474 systemd[1]: libpod-conmon-6f39f184e73be3fe593cfb45fffc8b91d62030dd7d24ce9769228703b67627cc.scope: Deactivated successfully.
Dec  7 05:14:18 np0005549474 podman[273265]: 2025-12-07 10:14:18.615526587 +0000 UTC m=+0.063430189 container create 3a9526b270c40c0b47281e3edc125b19329ca5c5d365bb41f3283d4d4724e7ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_goldberg, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:14:18 np0005549474 systemd[1]: Started libpod-conmon-3a9526b270c40c0b47281e3edc125b19329ca5c5d365bb41f3283d4d4724e7ea.scope.
Dec  7 05:14:18 np0005549474 podman[273265]: 2025-12-07 10:14:18.589476078 +0000 UTC m=+0.037379740 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:14:18 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:14:18 np0005549474 podman[273265]: 2025-12-07 10:14:18.713339142 +0000 UTC m=+0.161242744 container init 3a9526b270c40c0b47281e3edc125b19329ca5c5d365bb41f3283d4d4724e7ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 05:14:18 np0005549474 podman[273265]: 2025-12-07 10:14:18.726872871 +0000 UTC m=+0.174776463 container start 3a9526b270c40c0b47281e3edc125b19329ca5c5d365bb41f3283d4d4724e7ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_goldberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:14:18 np0005549474 podman[273265]: 2025-12-07 10:14:18.731316413 +0000 UTC m=+0.179220055 container attach 3a9526b270c40c0b47281e3edc125b19329ca5c5d365bb41f3283d4d4724e7ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_goldberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 05:14:18 np0005549474 dreamy_goldberg[273281]: 167 167
Dec  7 05:14:18 np0005549474 systemd[1]: libpod-3a9526b270c40c0b47281e3edc125b19329ca5c5d365bb41f3283d4d4724e7ea.scope: Deactivated successfully.
Dec  7 05:14:18 np0005549474 podman[273265]: 2025-12-07 10:14:18.734987432 +0000 UTC m=+0.182891024 container died 3a9526b270c40c0b47281e3edc125b19329ca5c5d365bb41f3283d4d4724e7ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_goldberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 05:14:18 np0005549474 nova_compute[256753]: 2025-12-07 10:14:18.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:14:18 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a9d382b53afb8915662de3d1f168e30f33074db31e7151fb843f12a7d715f6ea-merged.mount: Deactivated successfully.
Dec  7 05:14:18 np0005549474 podman[273265]: 2025-12-07 10:14:18.787798701 +0000 UTC m=+0.235702303 container remove 3a9526b270c40c0b47281e3edc125b19329ca5c5d365bb41f3283d4d4724e7ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 05:14:18 np0005549474 systemd[1]: libpod-conmon-3a9526b270c40c0b47281e3edc125b19329ca5c5d365bb41f3283d4d4724e7ea.scope: Deactivated successfully.
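The dreamy_goldberg sequence above is the complete podman lifecycle of one throwaway cephadm exec container: create, image pull (logged out of order because its monotonic offset m=+0.037 predates the create event), init, start, attach, exit, died, remove, all within roughly 200 ms. Its only stdout was "167 167", which is plausibly the ceph uid/gid pair cephadm probes from the image; a hedged reproduction, assuming cephadm's usual stat-based probe of /var/lib/ceph:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Ask the image who owns /var/lib/ceph; expected output: "167 167".
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"], text=True)
    print(out.strip())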
Dec  7 05:14:18 np0005549474 podman[273306]: 2025-12-07 10:14:18.99254407 +0000 UTC m=+0.053684653 container create 0ef6fdd1c4e03b04992393908d67bd034dde879daa0e19f75a85ca63e8b36165 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec  7 05:14:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:19.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
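The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, repeating roughly every two seconds for the rest of this window, look like load-balancer health probes against the radosgw beast frontend; op status=0 with http_status=200 means the gateway answered. The same probe by hand, with the port as an assumption (the log does not show where beast is bound):

    import http.client

    # Hypothetical RGW endpoint/port; substitute the real frontend address.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 matches the beast log lines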
Dec  7 05:14:19 np0005549474 systemd[1]: Started libpod-conmon-0ef6fdd1c4e03b04992393908d67bd034dde879daa0e19f75a85ca63e8b36165.scope.
Dec  7 05:14:19 np0005549474 podman[273306]: 2025-12-07 10:14:18.967369204 +0000 UTC m=+0.028509877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:14:19 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:14:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0efa883eaabab2351609913f57680120b5ab5665cdf44e9363c9f1ee26d8a367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0efa883eaabab2351609913f57680120b5ab5665cdf44e9363c9f1ee26d8a367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0efa883eaabab2351609913f57680120b5ab5665cdf44e9363c9f1ee26d8a367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0efa883eaabab2351609913f57680120b5ab5665cdf44e9363c9f1ee26d8a367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:19 np0005549474 podman[273306]: 2025-12-07 10:14:19.105677082 +0000 UTC m=+0.166817695 container init 0ef6fdd1c4e03b04992393908d67bd034dde879daa0e19f75a85ca63e8b36165 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jepsen, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:14:19 np0005549474 podman[273306]: 2025-12-07 10:14:19.120852526 +0000 UTC m=+0.181993119 container start 0ef6fdd1c4e03b04992393908d67bd034dde879daa0e19f75a85ca63e8b36165 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 05:14:19 np0005549474 podman[273306]: 2025-12-07 10:14:19.124576627 +0000 UTC m=+0.185717230 container attach 0ef6fdd1c4e03b04992393908d67bd034dde879daa0e19f75a85ca63e8b36165 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 05:14:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 KiB/s wr, 100 op/s
Dec  7 05:14:19 np0005549474 nova_compute[256753]: 2025-12-07 10:14:19.755 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:14:19 np0005549474 lvm[273398]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:14:19 np0005549474 lvm[273398]: VG ceph_vg0 finished
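The two lvm[273398] messages are LVM's event-driven autoactivation: the arrival of PV /dev/loop3 makes ceph_vg0 complete, so its LVs (the OSD volume listed earlier) can be activated. A quick confirmation of the VG state, as a sketch:

    import subprocess

    # One-line summary of the VG the udev event just completed.
    print(subprocess.check_output(
        ["vgs", "--noheadings", "-o", "vg_name,pv_count,lv_count", "ceph_vg0"],
        text=True).strip())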
Dec  7 05:14:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:19.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:19 np0005549474 recursing_jepsen[273323]: {}
Dec  7 05:14:19 np0005549474 systemd[1]: libpod-0ef6fdd1c4e03b04992393908d67bd034dde879daa0e19f75a85ca63e8b36165.scope: Deactivated successfully.
Dec  7 05:14:19 np0005549474 systemd[1]: libpod-0ef6fdd1c4e03b04992393908d67bd034dde879daa0e19f75a85ca63e8b36165.scope: Consumed 1.315s CPU time.
Dec  7 05:14:19 np0005549474 podman[273306]: 2025-12-07 10:14:19.887362691 +0000 UTC m=+0.948503284 container died 0ef6fdd1c4e03b04992393908d67bd034dde879daa0e19f75a85ca63e8b36165 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 05:14:19 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0efa883eaabab2351609913f57680120b5ab5665cdf44e9363c9f1ee26d8a367-merged.mount: Deactivated successfully.
Dec  7 05:14:19 np0005549474 podman[273306]: 2025-12-07 10:14:19.939097191 +0000 UTC m=+1.000237774 container remove 0ef6fdd1c4e03b04992393908d67bd034dde879daa0e19f75a85ca63e8b36165 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:14:19 np0005549474 systemd[1]: libpod-conmon-0ef6fdd1c4e03b04992393908d67bd034dde879daa0e19f75a85ca63e8b36165.scope: Deactivated successfully.
Dec  7 05:14:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:19] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Dec  7 05:14:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:19] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Dec  7 05:14:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:14:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:14:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:14:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
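The two config-key set commands from mgr.compute-0.dotugk are the cephadm mgr module persisting its refreshed device and host inventory for compute-0 into the mon key/value store. The cached value can be read back; a sketch using the key name copied from the log (treating the stored value as JSON is an assumption):

    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "config-key", "get",
         "mgr/cephadm/host.compute-0.devices.0"])
    print(json.dumps(json.loads(raw), indent=2)[:400])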
Dec  7 05:14:20 np0005549474 nova_compute[256753]: 2025-12-07 10:14:20.575 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:20 np0005549474 nova_compute[256753]: 2025-12-07 10:14:20.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:14:20 np0005549474 nova_compute[256753]: 2025-12-07 10:14:20.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:14:20 np0005549474 nova_compute[256753]: 2025-12-07 10:14:20.777 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:14:20 np0005549474 nova_compute[256753]: 2025-12-07 10:14:20.777 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:14:20 np0005549474 nova_compute[256753]: 2025-12-07 10:14:20.778 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:14:20 np0005549474 nova_compute[256753]: 2025-12-07 10:14:20.778 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  7 05:14:20 np0005549474 nova_compute[256753]: 2025-12-07 10:14:20.779 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:14:21 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:14:21 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:14:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.003000081s ======
Dec  7 05:14:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:21.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Dec  7 05:14:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 723 KiB/s rd, 1.3 KiB/s wr, 52 op/s
Dec  7 05:14:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:14:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/928876254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:14:21 np0005549474 nova_compute[256753]: 2025-12-07 10:14:21.255 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
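nova-compute's resource audit shells out to `ceph df --format=json --id openstack` to size its RBD-backed storage, which is exactly the pair of mon "df" dispatches audited at 10:14:21 and 10:14:22; each round trip took about half a second here. The same probe, reading the cluster totals from the documented top-level "stats" keys of the `ceph df` JSON:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    avail = stats["total_avail_bytes"] / 2**30
    total = stats["total_bytes"] / 2**30
    print(f"cluster: {avail:.1f} GiB free of {total:.1f} GiB")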
Dec  7 05:14:21 np0005549474 nova_compute[256753]: 2025-12-07 10:14:21.334 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:21 np0005549474 nova_compute[256753]: 2025-12-07 10:14:21.499 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:14:21 np0005549474 nova_compute[256753]: 2025-12-07 10:14:21.501 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4534MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  7 05:14:21 np0005549474 nova_compute[256753]: 2025-12-07 10:14:21.501 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:14:21 np0005549474 nova_compute[256753]: 2025-12-07 10:14:21.501 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:14:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:21.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:21 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:14:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:21 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:14:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:21 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
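The ganesha.nfsd lines that recur every ~5 s in this section are one grace-period cycle: the server (re)enters a 90 s grace window, reloads client recovery state from the RADOS backend, and checks whether grace can be lifted (here reclaim complete is 0 with a client count of 0, and enforcement reports ret=-45). The shared grace database can be inspected with nfs-ganesha's ganesha-rados-grace tool; a sketch with the pool and namespace left as placeholders, since the log does not show them:

    import subprocess

    # Placeholder pool/namespace; use the values from ganesha's RADOS_KV config.
    subprocess.run(["ganesha-rados-grace",
                    "--pool", "<grace_pool>", "--ns", "<grace_ns>", "dump"])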
Dec  7 05:14:21 np0005549474 nova_compute[256753]: 2025-12-07 10:14:21.953 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  7 05:14:21 np0005549474 nova_compute[256753]: 2025-12-07 10:14:21.954 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  7 05:14:21 np0005549474 nova_compute[256753]: 2025-12-07 10:14:21.970 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:14:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:14:22 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1539702002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:14:22 np0005549474 nova_compute[256753]: 2025-12-07 10:14:22.393 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:14:22 np0005549474 nova_compute[256753]: 2025-12-07 10:14:22.401 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  7 05:14:22 np0005549474 nova_compute[256753]: 2025-12-07 10:14:22.466 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
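The placement inventory line encodes how schedulable capacity is derived: per resource class, capacity = (total - reserved) * allocation_ratio, so the values logged here yield 32 schedulable VCPUs, 7168 MB of RAM, and 52.2 GB of disk. A worked check under that formula, with the numbers copied from the log:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2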
Dec  7 05:14:22 np0005549474 nova_compute[256753]: 2025-12-07 10:14:22.469 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  7 05:14:22 np0005549474 nova_compute[256753]: 2025-12-07 10:14:22.469 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:14:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:14:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:23.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v965: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.3 KiB/s wr, 27 op/s
Dec  7 05:14:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:23.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:24 np0005549474 nova_compute[256753]: 2025-12-07 10:14:24.470 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:14:24 np0005549474 nova_compute[256753]: 2025-12-07 10:14:24.471 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  7 05:14:24 np0005549474 nova_compute[256753]: 2025-12-07 10:14:24.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:14:24 np0005549474 nova_compute[256753]: 2025-12-07 10:14:24.755 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  7 05:14:24 np0005549474 nova_compute[256753]: 2025-12-07 10:14:24.755 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  7 05:14:24 np0005549474 nova_compute[256753]: 2025-12-07 10:14:24.774 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  7 05:14:24 np0005549474 nova_compute[256753]: 2025-12-07 10:14:24.774 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:14:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:25.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v966: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 28 op/s
Dec  7 05:14:25 np0005549474 nova_compute[256753]: 2025-12-07 10:14:25.578 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:25 np0005549474 nova_compute[256753]: 2025-12-07 10:14:25.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:14:25 np0005549474 nova_compute[256753]: 2025-12-07 10:14:25.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:14:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:25.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:14:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:14:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:14:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:14:26 np0005549474 nova_compute[256753]: 2025-12-07 10:14:26.378 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:27.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:14:27.164Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
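Alertmanager is failing to deliver a dashboard alert: both ceph-dashboard webhook receivers (compute-1 and compute-2, port 8443, path /api/prometheus_receiver) time out, and the identical failure repeats at 10:14:37 below, so the notification keeps being retried. A minimal reachability probe for one receiver, assuming plain HTTP as in the logged URL:

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        # The receiver expects a POST; an empty JSON body is enough to tell
        # a timeout/refused connection apart from an HTTP-level error.
        urllib.request.urlopen(url, data=b"{}", timeout=5)
    except Exception as exc:
        print("receiver unreachable:", exc)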
Dec  7 05:14:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v967: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Dec  7 05:14:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:14:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:14:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:14:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:27.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:29.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Dec  7 05:14:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:29.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:29] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  7 05:14:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:29] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  7 05:14:30 np0005549474 nova_compute[256753]: 2025-12-07 10:14:30.606 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:30 np0005549474 nova_compute[256753]: 2025-12-07 10:14:30.749 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:14:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:14:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:14:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:14:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:14:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:31.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 2 op/s
Dec  7 05:14:31 np0005549474 podman[273520]: 2025-12-07 10:14:31.336902434 +0000 UTC m=+0.134530676 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:14:31 np0005549474 podman[273521]: 2025-12-07 10:14:31.376569956 +0000 UTC m=+0.173887220 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
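The health_status=healthy events for multipathd and ovn_controller are podman running each container's configured healthcheck (the /openstack/healthcheck script mounted read-only per the config_data above). The same check can be triggered on demand; exit status 0 means healthy:

    import subprocess

    for name in ("multipathd", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")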
Dec  7 05:14:31 np0005549474 nova_compute[256753]: 2025-12-07 10:14:31.379 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:31.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:14:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:33.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Dec  7 05:14:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:33.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:35.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s
Dec  7 05:14:35 np0005549474 podman[273570]: 2025-12-07 10:14:35.276496046 +0000 UTC m=+0.088134123 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 05:14:35 np0005549474 nova_compute[256753]: 2025-12-07 10:14:35.642 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:35.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:14:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:14:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:14:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:36 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:14:36 np0005549474 nova_compute[256753]: 2025-12-07 10:14:36.381 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:37.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:14:37.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:14:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:14:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:14:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:37.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:38.626 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:14:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:38.627 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:14:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:38.627 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:14:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:39.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v973: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:14:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:39.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:39] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  7 05:14:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:39] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec  7 05:14:40 np0005549474 nova_compute[256753]: 2025-12-07 10:14:40.645 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:14:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:14:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:14:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:14:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:41.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v974: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:14:41 np0005549474 nova_compute[256753]: 2025-12-07 10:14:41.417 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:41.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:14:42
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'backups', 'images', 'vms', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log']
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:14:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:14:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:14:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:14:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:14:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:43.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v975: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:14:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:43.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:44 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Check health
Dec  7 05:14:44 np0005549474 nova_compute[256753]: 2025-12-07 10:14:44.984 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:14:44 np0005549474 nova_compute[256753]: 2025-12-07 10:14:44.985 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:14:45 np0005549474 nova_compute[256753]: 2025-12-07 10:14:45.005 256757 DEBUG nova.compute.manager [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  7 05:14:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:45.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:45 np0005549474 nova_compute[256753]: 2025-12-07 10:14:45.090 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:14:45 np0005549474 nova_compute[256753]: 2025-12-07 10:14:45.090 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:14:45 np0005549474 nova_compute[256753]: 2025-12-07 10:14:45.098 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  7 05:14:45 np0005549474 nova_compute[256753]: 2025-12-07 10:14:45.099 256757 INFO nova.compute.claims [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Claim successful on node compute-0.ctlplane.example.com
Dec  7 05:14:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v976: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:14:45 np0005549474 nova_compute[256753]: 2025-12-07 10:14:45.503 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:14:45 np0005549474 nova_compute[256753]: 2025-12-07 10:14:45.687 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:45.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:14:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:14:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:14:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:14:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:14:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3797059465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.029 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.037 256757 DEBUG nova.compute.provider_tree [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.067 256757 DEBUG nova.scheduler.client.report [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.113 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.114 256757 DEBUG nova.compute.manager [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.198 256757 DEBUG nova.compute.manager [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.199 256757 DEBUG nova.network.neutron [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.217 256757 INFO nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.238 256757 DEBUG nova.compute.manager [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.323 256757 DEBUG nova.compute.manager [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.325 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.325 256757 INFO nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Creating image(s)
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.365 256757 DEBUG nova.storage.rbd_utils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.405 256757 DEBUG nova.storage.rbd_utils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.442 256757 DEBUG nova.storage.rbd_utils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.446 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.471 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.550 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.551 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.552 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.553 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.592 256757 DEBUG nova.storage.rbd_utils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.597 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:14:46 np0005549474 nova_compute[256753]: 2025-12-07 10:14:46.629 256757 DEBUG nova.policy [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f27cf20bf8c4429aa12589418a57e41', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ad61a97ffab4252be3eafb028b560c1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  7 05:14:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:47.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:14:47.167Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:14:47 np0005549474 nova_compute[256753]: 2025-12-07 10:14:47.167 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:14:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v977: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:14:47 np0005549474 nova_compute[256753]: 2025-12-07 10:14:47.259 256757 DEBUG nova.storage.rbd_utils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] resizing rbd image b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  7 05:14:47 np0005549474 nova_compute[256753]: 2025-12-07 10:14:47.397 256757 DEBUG nova.objects.instance [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'migration_context' on Instance uuid b6ed365e-3ce3-4449-8967-f77cf4a1dd55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  7 05:14:47 np0005549474 nova_compute[256753]: 2025-12-07 10:14:47.416 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  7 05:14:47 np0005549474 nova_compute[256753]: 2025-12-07 10:14:47.417 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Ensure instance console log exists: /var/lib/nova/instances/b6ed365e-3ce3-4449-8967-f77cf4a1dd55/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  7 05:14:47 np0005549474 nova_compute[256753]: 2025-12-07 10:14:47.417 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:14:47 np0005549474 nova_compute[256753]: 2025-12-07 10:14:47.418 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:14:47 np0005549474 nova_compute[256753]: 2025-12-07 10:14:47.418 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.781642) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487781671, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1394, "num_deletes": 255, "total_data_size": 2484077, "memory_usage": 2514304, "flush_reason": "Manual Compaction"}
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487803184, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2439275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27000, "largest_seqno": 28393, "table_properties": {"data_size": 2432856, "index_size": 3554, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13549, "raw_average_key_size": 19, "raw_value_size": 2419908, "raw_average_value_size": 3461, "num_data_blocks": 157, "num_entries": 699, "num_filter_entries": 699, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102360, "oldest_key_time": 1765102360, "file_creation_time": 1765102487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 21658 microseconds, and 9782 cpu microseconds.
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.803291) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2439275 bytes OK
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.803319) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.805253) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.805277) EVENT_LOG_v1 {"time_micros": 1765102487805270, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.805303) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2478000, prev total WAL file size 2478000, number of live WAL files 2.
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.806396) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2382KB)], [59(14MB)]
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487806441, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17454464, "oldest_snapshot_seqno": -1}
Dec  7 05:14:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:47.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6131 keys, 17319724 bytes, temperature: kUnknown
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487940089, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 17319724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17275392, "index_size": 27902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 156186, "raw_average_key_size": 25, "raw_value_size": 17161759, "raw_average_value_size": 2799, "num_data_blocks": 1143, "num_entries": 6131, "num_filter_entries": 6131, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765102487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.940460) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17319724 bytes
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.941832) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.4 rd, 129.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 14.3 +0.0 blob) out(16.5 +0.0 blob), read-write-amplify(14.3) write-amplify(7.1) OK, records in: 6657, records dropped: 526 output_compression: NoCompression
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.941860) EVENT_LOG_v1 {"time_micros": 1765102487941847, "job": 32, "event": "compaction_finished", "compaction_time_micros": 133810, "compaction_time_cpu_micros": 54405, "output_level": 6, "num_output_files": 1, "total_output_size": 17319724, "num_input_records": 6657, "num_output_records": 6131, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487942649, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487947233, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.806338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.947336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.947343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.947345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.947347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:14:47 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:14:47.947349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:14:48 np0005549474 nova_compute[256753]: 2025-12-07 10:14:48.157 256757 DEBUG nova.network.neutron [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Successfully created port: 25408c74-10d1-48c0-a582-3f2bab5f5128 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  7 05:14:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:49.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v978: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:14:49 np0005549474 nova_compute[256753]: 2025-12-07 10:14:49.610 256757 DEBUG nova.network.neutron [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Successfully updated port: 25408c74-10d1-48c0-a582-3f2bab5f5128 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  7 05:14:49 np0005549474 nova_compute[256753]: 2025-12-07 10:14:49.695 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  7 05:14:49 np0005549474 nova_compute[256753]: 2025-12-07 10:14:49.696 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquired lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  7 05:14:49 np0005549474 nova_compute[256753]: 2025-12-07 10:14:49.696 256757 DEBUG nova.network.neutron [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  7 05:14:49 np0005549474 nova_compute[256753]: 2025-12-07 10:14:49.734 256757 DEBUG nova.compute.manager [req-32c88f83-b289-4e51-a60c-ddbe46198655 req-0101a707-2f36-47c6-a1ff-91794b3da251 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Received event network-changed-25408c74-10d1-48c0-a582-3f2bab5f5128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:14:49 np0005549474 nova_compute[256753]: 2025-12-07 10:14:49.735 256757 DEBUG nova.compute.manager [req-32c88f83-b289-4e51-a60c-ddbe46198655 req-0101a707-2f36-47c6-a1ff-91794b3da251 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Refreshing instance network info cache due to event network-changed-25408c74-10d1-48c0-a582-3f2bab5f5128. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  7 05:14:49 np0005549474 nova_compute[256753]: 2025-12-07 10:14:49.735 256757 DEBUG oslo_concurrency.lockutils [req-32c88f83-b289-4e51-a60c-ddbe46198655 req-0101a707-2f36-47c6-a1ff-91794b3da251 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  7 05:14:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:49.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:49] "GET /metrics HTTP/1.1" 200 48377 "" "Prometheus/2.51.0"
Dec  7 05:14:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:49] "GET /metrics HTTP/1.1" 200 48377 "" "Prometheus/2.51.0"
Dec  7 05:14:50 np0005549474 nova_compute[256753]: 2025-12-07 10:14:50.375 256757 DEBUG nova.network.neutron [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  7 05:14:50 np0005549474 nova_compute[256753]: 2025-12-07 10:14:50.689 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:14:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:14:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:14:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:14:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:51.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v979: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.474 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:14:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:14:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:51.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.931 256757 DEBUG nova.network.neutron [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Updating instance_info_cache with network_info: [{"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.961 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Releasing lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.961 256757 DEBUG nova.compute.manager [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Instance network_info: |[{"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.961 256757 DEBUG oslo_concurrency.lockutils [req-32c88f83-b289-4e51-a60c-ddbe46198655 req-0101a707-2f36-47c6-a1ff-91794b3da251 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.962 256757 DEBUG nova.network.neutron [req-32c88f83-b289-4e51-a60c-ddbe46198655 req-0101a707-2f36-47c6-a1ff-91794b3da251 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Refreshing network info cache for port 25408c74-10d1-48c0-a582-3f2bab5f5128 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.964 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Start _get_guest_xml network_info=[{"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'image_id': 'af7b5730-2fa9-449f-8ccb-a9519582f1b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.970 256757 WARNING nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.975 256757 DEBUG nova.virt.libvirt.host [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.976 256757 DEBUG nova.virt.libvirt.host [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.988 256757 DEBUG nova.virt.libvirt.host [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.990 256757 DEBUG nova.virt.libvirt.host [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
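
The two host.py probes above show the cgroup detection order: the v1 CPU controller is missing, the v2 controller is found, so CPU tuning for this guest goes through the unified hierarchy. A minimal sketch of a v2-style probe, assuming the standard unified mount at /sys/fs/cgroup (illustrative, not nova's exact code):

    import pathlib

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # On a cgroup v2 (unified) hierarchy the available controllers are
        # listed, space-separated, in cgroup.controllers at the mount root.
        controllers = pathlib.Path(root, "cgroup.controllers")
        try:
            return "cpu" in controllers.read_text().split()
        except FileNotFoundError:
            return False  # no such file => not a v2 hierarchy

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log
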
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.990 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.991 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-07T10:06:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bc1a767b-c985-4370-b41e-5cb294d603d7',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.992 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.992 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.993 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.993 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.994 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.995 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.995 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.996 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.996 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  7 05:14:51 np0005549474 nova_compute[256753]: 2025-12-07 10:14:51.997 256757 DEBUG nova.virt.hardware [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
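
The hardware.py sequence above is the whole topology negotiation: no flavor or image constraints (limits and preferences all 0:0:0), a ceiling of 65536 per dimension, and one vCPU, hence a single candidate of sockets=1, cores=1, threads=1. A toy reconstruction of that enumeration (simplified; the real logic in nova/virt/hardware.py also weighs preferences and NUMA placement):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) triple whose product
        # exactly accounts for the vCPU count, within the limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus // s, max_cores) + 1):
                t = vcpus // (s * c)
                if s * c * t == vcpus and t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
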
Dec  7 05:14:52 np0005549474 nova_compute[256753]: 2025-12-07 10:14:52.002 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:14:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 05:14:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786809290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 05:14:52 np0005549474 nova_compute[256753]: 2025-12-07 10:14:52.496 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:14:52 np0005549474 nova_compute[256753]: 2025-12-07 10:14:52.523 256757 DEBUG nova.storage.rbd_utils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:14:52 np0005549474 nova_compute[256753]: 2025-12-07 10:14:52.527 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:14:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:14:52 np0005549474 nova_compute[256753]: 2025-12-07 10:14:52.983 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
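
Each RBD-backed disk triggers a `ceph mon dump --format=json` (hence the two ~0.5 s subprocess calls here); the monitor addresses it returns become the <host> elements of the rbd <source> in the domain XML below. A hedged sketch of that round trip; the 'mons'/'addr' field names follow the classic mon-dump JSON and can vary between Ceph releases:

    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True,
                         text=True).stdout

    # Each mon entry carries an address like "192.168.122.100:6789/0";
    # drop the /nonce suffix to get host:port pairs for libvirt.
    mons = [m["addr"].split("/")[0] for m in json.loads(out)["mons"]]
    print(mons)  # e.g. ['192.168.122.100:6789', '192.168.122.102:6789', ...]
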
Dec  7 05:14:52 np0005549474 nova_compute[256753]: 2025-12-07 10:14:52.984 256757 DEBUG nova.virt.libvirt.vif [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2120887576',display_name='tempest-TestNetworkBasicOps-server-2120887576',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2120887576',id=10,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE22Oz5EJp5+ZpDrGnLc9mUFXRfTxAvMWx8J3lunbrmI60ZtyQDIv1NM8RmqNVILrRljAuiOgvN9z44Juq39ak6Q7u5G54nLys2aiSQuSXLufah5ku1wQTiEUWTZTK/3NQ==',key_name='tempest-TestNetworkBasicOps-2034535873',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-sp71l7u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:14:46Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=b6ed365e-3ce3-4449-8967-f77cf4a1dd55,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  7 05:14:52 np0005549474 nova_compute[256753]: 2025-12-07 10:14:52.985 256757 DEBUG nova.network.os_vif_util [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:14:52 np0005549474 nova_compute[256753]: 2025-12-07 10:14:52.986 256757 DEBUG nova.network.os_vif_util [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:0f:61,bridge_name='br-int',has_traffic_filtering=True,id=25408c74-10d1-48c0-a582-3f2bab5f5128,network=Network(16f1b908-2cde-48b9-a6af-8fd9e0b59fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25408c74-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:14:52 np0005549474 nova_compute[256753]: 2025-12-07 10:14:52.987 256757 DEBUG nova.objects.instance [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6ed365e-3ce3-4449-8967-f77cf4a1dd55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.000 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] End _get_guest_xml xml=<domain type="kvm">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  <uuid>b6ed365e-3ce3-4449-8967-f77cf4a1dd55</uuid>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  <name>instance-0000000a</name>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  <memory>131072</memory>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  <vcpu>1</vcpu>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  <metadata>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <nova:name>tempest-TestNetworkBasicOps-server-2120887576</nova:name>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <nova:creationTime>2025-12-07 10:14:51</nova:creationTime>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <nova:flavor name="m1.nano">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <nova:memory>128</nova:memory>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <nova:disk>1</nova:disk>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <nova:swap>0</nova:swap>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <nova:vcpus>1</nova:vcpus>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      </nova:flavor>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <nova:owner>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      </nova:owner>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <nova:ports>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <nova:port uuid="25408c74-10d1-48c0-a582-3f2bab5f5128">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        </nova:port>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      </nova:ports>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    </nova:instance>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  </metadata>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  <sysinfo type="smbios">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <system>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <entry name="manufacturer">RDO</entry>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <entry name="product">OpenStack Compute</entry>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <entry name="serial">b6ed365e-3ce3-4449-8967-f77cf4a1dd55</entry>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <entry name="uuid">b6ed365e-3ce3-4449-8967-f77cf4a1dd55</entry>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <entry name="family">Virtual Machine</entry>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    </system>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  </sysinfo>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  <os>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <boot dev="hd"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <smbios mode="sysinfo"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  <features>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <acpi/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <apic/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <vmcoreinfo/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  </features>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  <clock offset="utc">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <timer name="pit" tickpolicy="delay"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <timer name="hpet" present="no"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  </clock>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  <cpu mode="host-model" match="exact">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <topology sockets="1" cores="1" threads="1"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  </cpu>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  <devices>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <disk type="network" device="disk">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <target dev="vda" bus="virtio"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <disk type="network" device="cdrom">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk.config">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <target dev="sda" bus="sata"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <interface type="ethernet">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <mac address="fa:16:3e:53:0f:61"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <driver name="vhost" rx_queue_size="512"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <mtu size="1442"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <target dev="tap25408c74-10"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <serial type="pty">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <log file="/var/lib/nova/instances/b6ed365e-3ce3-4449-8967-f77cf4a1dd55/console.log" append="off"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    </serial>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <video>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    </video>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <input type="tablet" bus="usb"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <rng model="virtio">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <backend model="random">/dev/urandom</backend>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    </rng>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <controller type="usb" index="0"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    <memballoon model="virtio">
Dec  7 05:14:53 np0005549474 nova_compute[256753]:      <stats period="10"/>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:    </memballoon>
Dec  7 05:14:53 np0005549474 nova_compute[256753]:  </devices>
Dec  7 05:14:53 np0005549474 nova_compute[256753]: </domain>
Dec  7 05:14:53 np0005549474 nova_compute[256753]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
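
That closes the generated domain XML. Conceptually, what happens next is a libvirt define-and-create pair; a minimal sketch with the libvirt-python bindings (nova's actual spawn path adds flags, event callbacks and rollback handling):

    import libvirt

    with open("instance-0000000a.xml") as f:  # the XML dumped above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # boot it ('virsh start' equivalent)
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
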
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.001 256757 DEBUG nova.compute.manager [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Preparing to wait for external event network-vif-plugged-25408c74-10d1-48c0-a582-3f2bab5f5128 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.002 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.002 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.003 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
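
The acquire/release pair above guards the per-instance event table while the manager registers interest in network-vif-plugged before actually plugging the VIF. The same primitive is available directly from oslo.concurrency; a minimal sketch using the lock name from the log:

    from oslo_concurrency import lockutils

    # In-process lock, the same mechanism behind the "<uuid>-events"
    # critical section logged above.
    with lockutils.lock("b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events"):
        pass  # ... create or look up the pending event here ...
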
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.004 256757 DEBUG nova.virt.libvirt.vif [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2120887576',display_name='tempest-TestNetworkBasicOps-server-2120887576',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2120887576',id=10,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE22Oz5EJp5+ZpDrGnLc9mUFXRfTxAvMWx8J3lunbrmI60ZtyQDIv1NM8RmqNVILrRljAuiOgvN9z44Juq39ak6Q7u5G54nLys2aiSQuSXLufah5ku1wQTiEUWTZTK/3NQ==',key_name='tempest-TestNetworkBasicOps-2034535873',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-sp71l7u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:14:46Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=b6ed365e-3ce3-4449-8967-f77cf4a1dd55,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.004 256757 DEBUG nova.network.os_vif_util [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.005 256757 DEBUG nova.network.os_vif_util [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:0f:61,bridge_name='br-int',has_traffic_filtering=True,id=25408c74-10d1-48c0-a582-3f2bab5f5128,network=Network(16f1b908-2cde-48b9-a6af-8fd9e0b59fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25408c74-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.006 256757 DEBUG os_vif [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:0f:61,bridge_name='br-int',has_traffic_filtering=True,id=25408c74-10d1-48c0-a582-3f2bab5f5128,network=Network(16f1b908-2cde-48b9-a6af-8fd9e0b59fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25408c74-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.007 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.008 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.008 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.012 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.013 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25408c74-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.013 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25408c74-10, col_values=(('external_ids', {'iface-id': '25408c74-10d1-48c0-a582-3f2bab5f5128', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:0f:61', 'vm-uuid': 'b6ed365e-3ce3-4449-8967-f77cf4a1dd55'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.015 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:53 np0005549474 NetworkManager[49051]: <info>  [1765102493.0170] manager: (tap25408c74-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.018 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.023 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.024 256757 INFO os_vif [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:0f:61,bridge_name='br-int',has_traffic_filtering=True,id=25408c74-10d1-48c0-a582-3f2bab5f5128,network=Network(16f1b908-2cde-48b9-a6af-8fd9e0b59fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25408c74-10')#033[00m
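
The ovsdbapp transaction above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface row) is the os-vif OVS plug. An equivalent sketch via ovs-vsctl, reusing the identifiers from the log (illustration only; os-vif talks to ovsdb-server directly rather than shelling out):

    import subprocess

    bridge, port = "br-int", "tap25408c74-10"
    external_ids = {
        "iface-id": "25408c74-10d1-48c0-a582-3f2bab5f5128",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:53:0f:61",
        "vm-uuid": "b6ed365e-3ce3-4449-8967-f77cf4a1dd55",
    }

    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += ["external_ids:%s=%s" % kv for kv in external_ids.items()]
    subprocess.run(cmd, check=True)
    # ovn-controller then claims the lport, as logged further below.
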
Dec  7 05:14:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:53.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.080 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.081 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.081 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No VIF found with MAC fa:16:3e:53:0f:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.082 256757 INFO nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Using config drive#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.112 256757 DEBUG nova.storage.rbd_utils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:14:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v980: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.409 256757 INFO nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Creating config drive at /var/lib/nova/instances/b6ed365e-3ce3-4449-8967-f77cf4a1dd55/disk.config#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.418 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6ed365e-3ce3-4449-8967-f77cf4a1dd55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4t3y_c00 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.449 256757 DEBUG nova.network.neutron [req-32c88f83-b289-4e51-a60c-ddbe46198655 req-0101a707-2f36-47c6-a1ff-91794b3da251 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Updated VIF entry in instance network info cache for port 25408c74-10d1-48c0-a582-3f2bab5f5128. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.451 256757 DEBUG nova.network.neutron [req-32c88f83-b289-4e51-a60c-ddbe46198655 req-0101a707-2f36-47c6-a1ff-91794b3da251 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Updating instance_info_cache with network_info: [{"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.465 256757 DEBUG oslo_concurrency.lockutils [req-32c88f83-b289-4e51-a60c-ddbe46198655 req-0101a707-2f36-47c6-a1ff-91794b3da251 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.560 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6ed365e-3ce3-4449-8967-f77cf4a1dd55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4t3y_c00" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.588 256757 DEBUG nova.storage.rbd_utils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.592 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6ed365e-3ce3-4449-8967-f77cf4a1dd55/disk.config b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.773 256757 DEBUG oslo_concurrency.processutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6ed365e-3ce3-4449-8967-f77cf4a1dd55/disk.config b6ed365e-3ce3-4449-8967-f77cf4a1dd55_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.774 256757 INFO nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Deleting local config drive /var/lib/nova/instances/b6ed365e-3ce3-4449-8967-f77cf4a1dd55/disk.config because it was imported into RBD.#033[00m
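
The config-drive flow above is three steps: build an ISO9660 image with mkisofs, `rbd import` it into the vms pool, then drop the local copy. A compressed sketch of the same sequence (paths and the temp directory are taken verbatim from the log; the -publisher flag and error handling are omitted):

    import os
    import subprocess

    inst = "b6ed365e-3ce3-4449-8967-f77cf4a1dd55"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # 1. Pack the metadata tree into an ISO labeled config-2.
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots",
                    "-allow-lowercase", "-allow-multidot", "-l", "-quiet",
                    "-J", "-r", "-V", "config-2", "/tmp/tmp4t3y_c00"],
                   check=True)

    # 2. Import it as an RBD image alongside the root disk.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{inst}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)

    # 3. The local ISO is redundant once it lives in RBD.
    os.unlink(iso)
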
Dec  7 05:14:53 np0005549474 systemd[1]: Starting libvirt secret daemon...
Dec  7 05:14:53 np0005549474 systemd[1]: Started libvirt secret daemon.
Dec  7 05:14:53 np0005549474 kernel: tap25408c74-10: entered promiscuous mode
Dec  7 05:14:53 np0005549474 NetworkManager[49051]: <info>  [1765102493.8517] manager: (tap25408c74-10): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Dec  7 05:14:53 np0005549474 ovn_controller[154296]: 2025-12-07T10:14:53Z|00069|binding|INFO|Claiming lport 25408c74-10d1-48c0-a582-3f2bab5f5128 for this chassis.
Dec  7 05:14:53 np0005549474 ovn_controller[154296]: 2025-12-07T10:14:53Z|00070|binding|INFO|25408c74-10d1-48c0-a582-3f2bab5f5128: Claiming fa:16:3e:53:0f:61 10.100.0.12
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.852 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.858 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:53.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.859 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.868 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:0f:61 10.100.0.12'], port_security=['fa:16:3e:53:0f:61 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b6ed365e-3ce3-4449-8967-f77cf4a1dd55', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e656fae-5de0-4f81-b63c-0344bc822186', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f2f5133-4d73-4f69-94ff-97d3a392bfa3, chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=25408c74-10d1-48c0-a582-3f2bab5f5128) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.870 164143 INFO neutron.agent.ovn.metadata.agent [-] Port 25408c74-10d1-48c0-a582-3f2bab5f5128 in datapath 16f1b908-2cde-48b9-a6af-8fd9e0b59fbd bound to our chassis#033[00m
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.872 164143 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16f1b908-2cde-48b9-a6af-8fd9e0b59fbd#033[00m
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.884 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[577fead4-d216-4d0f-9d05-7c989d4c80bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.885 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16f1b908-21 in ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  7 05:14:53 np0005549474 systemd-machined[217882]: New machine qemu-4-instance-0000000a.
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.887 262215 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16f1b908-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.887 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[d3adb9b6-604e-496c-88c0-eec11331a259]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.889 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfe2d61-f2ec-4ee3-9551-d5671bdd12f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:53 np0005549474 systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.903 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[fd449cec-33a5-4de9-818f-6f2d45b9d3db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.917 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:53 np0005549474 systemd-udevd[273979]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:14:53 np0005549474 ovn_controller[154296]: 2025-12-07T10:14:53Z|00071|binding|INFO|Setting lport 25408c74-10d1-48c0-a582-3f2bab5f5128 ovn-installed in OVS
Dec  7 05:14:53 np0005549474 ovn_controller[154296]: 2025-12-07T10:14:53Z|00072|binding|INFO|Setting lport 25408c74-10d1-48c0-a582-3f2bab5f5128 up in Southbound
Dec  7 05:14:53 np0005549474 nova_compute[256753]: 2025-12-07 10:14:53.923 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.931 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c6146f-2e56-4c80-a465-bbfb65a6ed0f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:53 np0005549474 NetworkManager[49051]: <info>  [1765102493.9387] device (tap25408c74-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 05:14:53 np0005549474 NetworkManager[49051]: <info>  [1765102493.9404] device (tap25408c74-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.965 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[135e46ae-c90e-4baf-90b6-722dc9006459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:53 np0005549474 NetworkManager[49051]: <info>  [1765102493.9718] manager: (tap16f1b908-20): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Dec  7 05:14:53 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:53.971 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[50971921-33f9-4db3-8a98-558bb97f88eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.002 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0fd5b9-a5bb-4d7f-81a9-9a355564e3a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.004 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[b19acb62-4810-4134-924a-0f57c3b2c8f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:54 np0005549474 NetworkManager[49051]: <info>  [1765102494.0305] device (tap16f1b908-20): carrier: link connected
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.039 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd7fc29-41ae-4988-9306-ce1ce79b6844]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.062 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[5f72cc47-d63d-4081-a58a-f6c2e8d08e76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16f1b908-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:95:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445669, 'reachable_time': 27067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274009, 'error': None, 'target': 'ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.076 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[ecaa5640-8910-40c7-b46a-7454eb8cbfd8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:95c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445669, 'tstamp': 445669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274010, 'error': None, 'target': 'ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.090 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac093fd-38fb-4618-8192-eb3ea2835f3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16f1b908-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:95:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445669, 'reachable_time': 27067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274011, 'error': None, 'target': 'ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.114 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[fb65a2b8-e0b8-4137-93f6-24c3df0660c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.160 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[dd614ec0-10b1-4ef3-a755-f297c1de0c49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.161 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16f1b908-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.161 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.161 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16f1b908-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.183 256757 DEBUG nova.compute.manager [req-ac990119-760a-4529-bfa6-6ac1d9453e92 req-59ae6f3f-b98c-40d8-9a8d-2493b54bd418 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Received event network-vif-plugged-25408c74-10d1-48c0-a582-3f2bab5f5128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.183 256757 DEBUG oslo_concurrency.lockutils [req-ac990119-760a-4529-bfa6-6ac1d9453e92 req-59ae6f3f-b98c-40d8-9a8d-2493b54bd418 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.183 256757 DEBUG oslo_concurrency.lockutils [req-ac990119-760a-4529-bfa6-6ac1d9453e92 req-59ae6f3f-b98c-40d8-9a8d-2493b54bd418 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.184 256757 DEBUG oslo_concurrency.lockutils [req-ac990119-760a-4529-bfa6-6ac1d9453e92 req-59ae6f3f-b98c-40d8-9a8d-2493b54bd418 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.184 256757 DEBUG nova.compute.manager [req-ac990119-760a-4529-bfa6-6ac1d9453e92 req-59ae6f3f-b98c-40d8-9a8d-2493b54bd418 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Processing event network-vif-plugged-25408c74-10d1-48c0-a582-3f2bab5f5128 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  7 05:14:54 np0005549474 kernel: tap16f1b908-20: entered promiscuous mode
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.202 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16f1b908-20, col_values=(('external_ids', {'iface-id': '766ac3bf-abeb-4072-b663-e8af9253a9f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:14:54 np0005549474 ovn_controller[154296]: 2025-12-07T10:14:54Z|00073|binding|INFO|Releasing lport 766ac3bf-abeb-4072-b663-e8af9253a9f9 from this chassis (sb_readonly=0)
Dec  7 05:14:54 np0005549474 NetworkManager[49051]: <info>  [1765102494.2058] manager: (tap16f1b908-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.200 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.206 164143 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16f1b908-2cde-48b9-a6af-8fd9e0b59fbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16f1b908-2cde-48b9-a6af-8fd9e0b59fbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.206 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[f25fc36a-d109-407a-874e-e54b0ce37183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.207 164143 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: global
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    log         /dev/log local0 debug
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    log-tag     haproxy-metadata-proxy-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    user        root
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    group       root
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    maxconn     1024
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    pidfile     /var/lib/neutron/external/pids/16f1b908-2cde-48b9-a6af-8fd9e0b59fbd.pid.haproxy
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    daemon
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: defaults
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    log global
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    mode http
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    option httplog
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    option dontlognull
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    option http-server-close
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    option forwardfor
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    retries                 3
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    timeout http-request    30s
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    timeout connect         30s
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    timeout client          32s
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    timeout server          32s
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    timeout http-keep-alive 30s
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: listen listener
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    bind 169.254.169.254:80
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    server metadata /var/lib/neutron/metadata_proxy
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]:    http-request add-header X-OVN-Network-ID 16f1b908-2cde-48b9-a6af-8fd9e0b59fbd
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  7 05:14:54 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:14:54.208 164143 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd', 'env', 'PROCESS_TAG=haproxy-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16f1b908-2cde-48b9-a6af-8fd9e0b59fbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.219 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:54 np0005549474 podman[274043]: 2025-12-07 10:14:54.603991872 +0000 UTC m=+0.064248871 container create 81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:14:54 np0005549474 systemd[1]: Started libpod-conmon-81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306.scope.
Dec  7 05:14:54 np0005549474 podman[274043]: 2025-12-07 10:14:54.563348584 +0000 UTC m=+0.023605643 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  7 05:14:54 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:14:54 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c432a818283061693325b571e613a1932bcb4b765aa6d9ec0cf99fb0e556df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  7 05:14:54 np0005549474 podman[274043]: 2025-12-07 10:14:54.704752318 +0000 UTC m=+0.165009297 container init 81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:14:54 np0005549474 podman[274043]: 2025-12-07 10:14:54.723059096 +0000 UTC m=+0.183316065 container start 81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:14:54 np0005549474 neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd[274059]: [NOTICE]   (274079) : New worker (274090) forked
Dec  7 05:14:54 np0005549474 neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd[274059]: [NOTICE]   (274079) : Loading success.
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.875 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102494.8746905, b6ed365e-3ce3-4449-8967-f77cf4a1dd55 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.875 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] VM Started (Lifecycle Event)#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.878 256757 DEBUG nova.compute.manager [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.881 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.884 256757 INFO nova.virt.libvirt.driver [-] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Instance spawned successfully.#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.884 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.905 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.910 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.913 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.914 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.914 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.915 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.915 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.916 256757 DEBUG nova.virt.libvirt.driver [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.934 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.934 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102494.87478, b6ed365e-3ce3-4449-8967-f77cf4a1dd55 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.934 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] VM Paused (Lifecycle Event)#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.968 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.972 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102494.8805401, b6ed365e-3ce3-4449-8967-f77cf4a1dd55 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.972 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] VM Resumed (Lifecycle Event)#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.997 256757 INFO nova.compute.manager [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Took 8.67 seconds to spawn the instance on the hypervisor.#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.998 256757 DEBUG nova.compute.manager [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:14:54 np0005549474 nova_compute[256753]: 2025-12-07 10:14:54.998 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:14:55 np0005549474 nova_compute[256753]: 2025-12-07 10:14:55.005 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  7 05:14:55 np0005549474 nova_compute[256753]: 2025-12-07 10:14:55.036 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  7 05:14:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:55.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:55 np0005549474 nova_compute[256753]: 2025-12-07 10:14:55.073 256757 INFO nova.compute.manager [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Took 10.01 seconds to build instance.#033[00m
Dec  7 05:14:55 np0005549474 nova_compute[256753]: 2025-12-07 10:14:55.092 256757 DEBUG oslo_concurrency.lockutils [None req-096af8d7-3676-4dc0-8287-e7a49cdf6d6c 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:14:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v981: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:14:55 np0005549474 nova_compute[256753]: 2025-12-07 10:14:55.691 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:55.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:14:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:14:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:14:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:14:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:14:56 np0005549474 nova_compute[256753]: 2025-12-07 10:14:56.283 256757 DEBUG nova.compute.manager [req-090a9dda-6789-4ca6-a933-13a274061e62 req-67f8db53-6d66-4180-bbf4-91b888a795b2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Received event network-vif-plugged-25408c74-10d1-48c0-a582-3f2bab5f5128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:14:56 np0005549474 nova_compute[256753]: 2025-12-07 10:14:56.283 256757 DEBUG oslo_concurrency.lockutils [req-090a9dda-6789-4ca6-a933-13a274061e62 req-67f8db53-6d66-4180-bbf4-91b888a795b2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:14:56 np0005549474 nova_compute[256753]: 2025-12-07 10:14:56.284 256757 DEBUG oslo_concurrency.lockutils [req-090a9dda-6789-4ca6-a933-13a274061e62 req-67f8db53-6d66-4180-bbf4-91b888a795b2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:14:56 np0005549474 nova_compute[256753]: 2025-12-07 10:14:56.284 256757 DEBUG oslo_concurrency.lockutils [req-090a9dda-6789-4ca6-a933-13a274061e62 req-67f8db53-6d66-4180-bbf4-91b888a795b2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:14:56 np0005549474 nova_compute[256753]: 2025-12-07 10:14:56.284 256757 DEBUG nova.compute.manager [req-090a9dda-6789-4ca6-a933-13a274061e62 req-67f8db53-6d66-4180-bbf4-91b888a795b2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] No waiting events found dispatching network-vif-plugged-25408c74-10d1-48c0-a582-3f2bab5f5128 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:14:56 np0005549474 nova_compute[256753]: 2025-12-07 10:14:56.285 256757 WARNING nova.compute.manager [req-090a9dda-6789-4ca6-a933-13a274061e62 req-67f8db53-6d66-4180-bbf4-91b888a795b2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Received unexpected event network-vif-plugged-25408c74-10d1-48c0-a582-3f2bab5f5128 for instance with vm_state active and task_state None.#033[00m
Dec  7 05:14:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:57.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:14:57.168Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:14:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v982: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:14:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:14:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:14:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:14:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:57.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:58 np0005549474 nova_compute[256753]: 2025-12-07 10:14:58.017 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:58 np0005549474 ovn_controller[154296]: 2025-12-07T10:14:58Z|00074|binding|INFO|Releasing lport 766ac3bf-abeb-4072-b663-e8af9253a9f9 from this chassis (sb_readonly=0)
Dec  7 05:14:58 np0005549474 NetworkManager[49051]: <info>  [1765102498.8739] manager: (patch-br-int-to-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Dec  7 05:14:58 np0005549474 NetworkManager[49051]: <info>  [1765102498.8760] manager: (patch-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec  7 05:14:58 np0005549474 nova_compute[256753]: 2025-12-07 10:14:58.874 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:58 np0005549474 ovn_controller[154296]: 2025-12-07T10:14:58Z|00075|binding|INFO|Releasing lport 766ac3bf-abeb-4072-b663-e8af9253a9f9 from this chassis (sb_readonly=0)
Dec  7 05:14:58 np0005549474 nova_compute[256753]: 2025-12-07 10:14:58.887 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:58 np0005549474 nova_compute[256753]: 2025-12-07 10:14:58.897 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:14:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:59.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v983: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec  7 05:14:59 np0005549474 nova_compute[256753]: 2025-12-07 10:14:59.331 256757 DEBUG nova.compute.manager [req-d43679f1-150b-4fa7-8da2-25caadb85d0f req-41bc5a63-9f39-4d7f-ba1c-5676ba8011c5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Received event network-changed-25408c74-10d1-48c0-a582-3f2bab5f5128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:14:59 np0005549474 nova_compute[256753]: 2025-12-07 10:14:59.332 256757 DEBUG nova.compute.manager [req-d43679f1-150b-4fa7-8da2-25caadb85d0f req-41bc5a63-9f39-4d7f-ba1c-5676ba8011c5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Refreshing instance network info cache due to event network-changed-25408c74-10d1-48c0-a582-3f2bab5f5128. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:14:59 np0005549474 nova_compute[256753]: 2025-12-07 10:14:59.332 256757 DEBUG oslo_concurrency.lockutils [req-d43679f1-150b-4fa7-8da2-25caadb85d0f req-41bc5a63-9f39-4d7f-ba1c-5676ba8011c5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:14:59 np0005549474 nova_compute[256753]: 2025-12-07 10:14:59.332 256757 DEBUG oslo_concurrency.lockutils [req-d43679f1-150b-4fa7-8da2-25caadb85d0f req-41bc5a63-9f39-4d7f-ba1c-5676ba8011c5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:14:59 np0005549474 nova_compute[256753]: 2025-12-07 10:14:59.332 256757 DEBUG nova.network.neutron [req-d43679f1-150b-4fa7-8da2-25caadb85d0f req-41bc5a63-9f39-4d7f-ba1c-5676ba8011c5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Refreshing network info cache for port 25408c74-10d1-48c0-a582-3f2bab5f5128 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:14:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:14:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:14:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:59.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:14:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:59] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Dec  7 05:14:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:14:59] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Dec  7 05:15:00 np0005549474 nova_compute[256753]: 2025-12-07 10:15:00.606 256757 DEBUG nova.network.neutron [req-d43679f1-150b-4fa7-8da2-25caadb85d0f req-41bc5a63-9f39-4d7f-ba1c-5676ba8011c5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Updated VIF entry in instance network info cache for port 25408c74-10d1-48c0-a582-3f2bab5f5128. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:15:00 np0005549474 nova_compute[256753]: 2025-12-07 10:15:00.607 256757 DEBUG nova.network.neutron [req-d43679f1-150b-4fa7-8da2-25caadb85d0f req-41bc5a63-9f39-4d7f-ba1c-5676ba8011c5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Updating instance_info_cache with network_info: [{"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:15:00 np0005549474 nova_compute[256753]: 2025-12-07 10:15:00.640 256757 DEBUG oslo_concurrency.lockutils [req-d43679f1-150b-4fa7-8da2-25caadb85d0f req-41bc5a63-9f39-4d7f-ba1c-5676ba8011c5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:15:00 np0005549474 nova_compute[256753]: 2025-12-07 10:15:00.693 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:15:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:01.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:15:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v984: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  7 05:15:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:01.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:02 np0005549474 podman[274125]: 2025-12-07 10:15:02.252323109 +0000 UTC m=+0.062478375 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec  7 05:15:02 np0005549474 podman[274126]: 2025-12-07 10:15:02.308466332 +0000 UTC m=+0.117371174 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:15:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:03 np0005549474 nova_compute[256753]: 2025-12-07 10:15:03.020 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:03.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v985: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  7 05:15:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:03.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:05.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v986: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec  7 05:15:05 np0005549474 nova_compute[256753]: 2025-12-07 10:15:05.696 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:05.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:06 np0005549474 podman[274201]: 2025-12-07 10:15:06.294473673 +0000 UTC m=+0.102091780 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  7 05:15:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:07.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:15:07.169Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:15:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v987: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  7 05:15:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:07.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:08 np0005549474 nova_compute[256753]: 2025-12-07 10:15:08.080 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:08 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:08Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:0f:61 10.100.0.12
Dec  7 05:15:08 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:08Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:0f:61 10.100.0.12
Dec  7 05:15:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:09.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v988: 337 pgs: 337 active+clean; 121 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec  7 05:15:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:09.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:09] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Dec  7 05:15:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:09] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Dec  7 05:15:10 np0005549474 nova_compute[256753]: 2025-12-07 10:15:10.698 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:11.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v989: 337 pgs: 337 active+clean; 121 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec  7 05:15:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=cleanup t=2025-12-07T10:15:11.705838756Z level=info msg="Completed cleanup jobs" duration=54.885659ms
Dec  7 05:15:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:11.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=grafana.update.checker t=2025-12-07T10:15:11.90990455Z level=info msg="Update check succeeded" duration=172.96313ms
Dec  7 05:15:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=plugins.update.checker t=2025-12-07T10:15:11.921801943Z level=info msg="Update check succeeded" duration=189.708725ms
Dec  7 05:15:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:15:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:15:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:15:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:15:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:15:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:15:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:15:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:15:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:13.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:13 np0005549474 nova_compute[256753]: 2025-12-07 10:15:13.115 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v990: 337 pgs: 337 active+clean; 121 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec  7 05:15:13 np0005549474 nova_compute[256753]: 2025-12-07 10:15:13.841 256757 INFO nova.compute.manager [None req-214d7c97-49f2-49ed-a70b-3085523836fd 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Get console output#033[00m
Dec  7 05:15:13 np0005549474 nova_compute[256753]: 2025-12-07 10:15:13.845 263860 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  7 05:15:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:13.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:14 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:14Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:0f:61 10.100.0.12
Dec  7 05:15:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:15.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v991: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec  7 05:15:15 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:15.632 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:15:15 np0005549474 nova_compute[256753]: 2025-12-07 10:15:15.633 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:15 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:15.633 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  7 05:15:15 np0005549474 nova_compute[256753]: 2025-12-07 10:15:15.701 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:15.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:16 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:17.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:15:17.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:15:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v992: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  7 05:15:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:17.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:18 np0005549474 nova_compute[256753]: 2025-12-07 10:15:18.152 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:18 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:18Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:0f:61 10.100.0.12
Dec  7 05:15:18 np0005549474 nova_compute[256753]: 2025-12-07 10:15:18.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:15:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:19.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v993: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec  7 05:15:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:19.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:19] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Dec  7 05:15:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:19] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Dec  7 05:15:20 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:20Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:0f:61 10.100.0.12
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.708 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.839 256757 DEBUG nova.compute.manager [req-5f1f20f6-fce7-41ad-8b25-5c222e74c8af req-43d2e466-626b-4a0d-9506-fb8d62f58aa2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Received event network-changed-25408c74-10d1-48c0-a582-3f2bab5f5128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.839 256757 DEBUG nova.compute.manager [req-5f1f20f6-fce7-41ad-8b25-5c222e74c8af req-43d2e466-626b-4a0d-9506-fb8d62f58aa2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Refreshing instance network info cache due to event network-changed-25408c74-10d1-48c0-a582-3f2bab5f5128. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.840 256757 DEBUG oslo_concurrency.lockutils [req-5f1f20f6-fce7-41ad-8b25-5c222e74c8af req-43d2e466-626b-4a0d-9506-fb8d62f58aa2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.840 256757 DEBUG oslo_concurrency.lockutils [req-5f1f20f6-fce7-41ad-8b25-5c222e74c8af req-43d2e466-626b-4a0d-9506-fb8d62f58aa2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.841 256757 DEBUG nova.network.neutron [req-5f1f20f6-fce7-41ad-8b25-5c222e74c8af req-43d2e466-626b-4a0d-9506-fb8d62f58aa2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Refreshing network info cache for port 25408c74-10d1-48c0-a582-3f2bab5f5128 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.875 256757 DEBUG oslo_concurrency.lockutils [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.875 256757 DEBUG oslo_concurrency.lockutils [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.876 256757 DEBUG oslo_concurrency.lockutils [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.876 256757 DEBUG oslo_concurrency.lockutils [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.876 256757 DEBUG oslo_concurrency.lockutils [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.877 256757 INFO nova.compute.manager [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Terminating instance#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.879 256757 DEBUG nova.compute.manager [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  7 05:15:20 np0005549474 kernel: tap25408c74-10 (unregistering): left promiscuous mode
Dec  7 05:15:20 np0005549474 NetworkManager[49051]: <info>  [1765102520.9349] device (tap25408c74-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.945 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:20 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:20Z|00076|binding|INFO|Releasing lport 25408c74-10d1-48c0-a582-3f2bab5f5128 from this chassis (sb_readonly=0)
Dec  7 05:15:20 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:20Z|00077|binding|INFO|Setting lport 25408c74-10d1-48c0-a582-3f2bab5f5128 down in Southbound
Dec  7 05:15:20 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:20Z|00078|binding|INFO|Removing iface tap25408c74-10 ovn-installed in OVS
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.948 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:20 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:20.953 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:0f:61 10.100.0.12'], port_security=['fa:16:3e:53:0f:61 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b6ed365e-3ce3-4449-8967-f77cf4a1dd55', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e656fae-5de0-4f81-b63c-0344bc822186', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f2f5133-4d73-4f69-94ff-97d3a392bfa3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=25408c74-10d1-48c0-a582-3f2bab5f5128) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:15:20 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:20.955 164143 INFO neutron.agent.ovn.metadata.agent [-] Port 25408c74-10d1-48c0-a582-3f2bab5f5128 in datapath 16f1b908-2cde-48b9-a6af-8fd9e0b59fbd unbound from our chassis#033[00m
Dec  7 05:15:20 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:20.956 164143 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16f1b908-2cde-48b9-a6af-8fd9e0b59fbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  7 05:15:20 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:20.957 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[19442c93-b104-42e0-a595-b4537fb25c4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:20 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:20.958 164143 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd namespace which is not needed anymore#033[00m
Dec  7 05:15:20 np0005549474 nova_compute[256753]: 2025-12-07 10:15:20.973 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:20 np0005549474 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec  7 05:15:20 np0005549474 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 15.207s CPU time.
Dec  7 05:15:20 np0005549474 systemd-machined[217882]: Machine qemu-4-instance-0000000a terminated.
Dec  7 05:15:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:21 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:21 np0005549474 kernel: tap25408c74-10: entered promiscuous mode
Dec  7 05:15:21 np0005549474 kernel: tap25408c74-10 (unregistering): left promiscuous mode
Dec  7 05:15:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:21.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:21 np0005549474 neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd[274059]: [NOTICE]   (274079) : haproxy version is 2.8.14-c23fe91
Dec  7 05:15:21 np0005549474 neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd[274059]: [NOTICE]   (274079) : path to executable is /usr/sbin/haproxy
Dec  7 05:15:21 np0005549474 neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd[274059]: [WARNING]  (274079) : Exiting Master process...
Dec  7 05:15:21 np0005549474 neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd[274059]: [ALERT]    (274079) : Current worker (274090) exited with code 143 (Terminated)
Dec  7 05:15:21 np0005549474 neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd[274059]: [WARNING]  (274079) : All workers exited. Exiting... (0)
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.104 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:21 np0005549474 systemd[1]: libpod-81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306.scope: Deactivated successfully.
Dec  7 05:15:21 np0005549474 podman[274331]: 2025-12-07 10:15:21.11315372 +0000 UTC m=+0.046917803 container died 81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.116 256757 INFO nova.virt.libvirt.driver [-] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Instance destroyed successfully.#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.117 256757 DEBUG nova.objects.instance [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'resources' on Instance uuid b6ed365e-3ce3-4449-8967-f77cf4a1dd55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.140 256757 DEBUG nova.virt.libvirt.vif [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2120887576',display_name='tempest-TestNetworkBasicOps-server-2120887576',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2120887576',id=10,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE22Oz5EJp5+ZpDrGnLc9mUFXRfTxAvMWx8J3lunbrmI60ZtyQDIv1NM8RmqNVILrRljAuiOgvN9z44Juq39ak6Q7u5G54nLys2aiSQuSXLufah5ku1wQTiEUWTZTK/3NQ==',key_name='tempest-TestNetworkBasicOps-2034535873',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:14:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-sp71l7u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:14:55Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=b6ed365e-3ce3-4449-8967-f77cf4a1dd55,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.144 256757 DEBUG nova.network.os_vif_util [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.145 256757 DEBUG nova.network.os_vif_util [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:0f:61,bridge_name='br-int',has_traffic_filtering=True,id=25408c74-10d1-48c0-a582-3f2bab5f5128,network=Network(16f1b908-2cde-48b9-a6af-8fd9e0b59fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25408c74-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.146 256757 DEBUG os_vif [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:0f:61,bridge_name='br-int',has_traffic_filtering=True,id=25408c74-10d1-48c0-a582-3f2bab5f5128,network=Network(16f1b908-2cde-48b9-a6af-8fd9e0b59fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25408c74-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  7 05:15:21 np0005549474 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306-userdata-shm.mount: Deactivated successfully.
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.151 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:21 np0005549474 systemd[1]: var-lib-containers-storage-overlay-50c432a818283061693325b571e613a1932bcb4b765aa6d9ec0cf99fb0e556df-merged.mount: Deactivated successfully.
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.152 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25408c74-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.155 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.156 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.159 256757 INFO os_vif [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:0f:61,bridge_name='br-int',has_traffic_filtering=True,id=25408c74-10d1-48c0-a582-3f2bab5f5128,network=Network(16f1b908-2cde-48b9-a6af-8fd9e0b59fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25408c74-10')#033[00m
Dec  7 05:15:21 np0005549474 podman[274331]: 2025-12-07 10:15:21.170836404 +0000 UTC m=+0.104600457 container cleanup 81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Dec  7 05:15:21 np0005549474 systemd[1]: libpod-conmon-81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306.scope: Deactivated successfully.
Dec  7 05:15:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v994: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 12 KiB/s wr, 6 op/s
Dec  7 05:15:21 np0005549474 podman[274397]: 2025-12-07 10:15:21.240616786 +0000 UTC m=+0.050712336 container remove 81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  7 05:15:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:21.247 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5b2537-c32b-4711-9b95-bb350cf2f439]: (4, ('Sun Dec  7 10:15:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd (81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306)\n81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306\nSun Dec  7 10:15:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd (81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306)\n81dbd51cad04be1358d1a41aadfb8f7724da966f3a1709bee6c66ddcf8a8c306\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:21.249 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[0c240ff2-396d-42e8-8473-08a3b7cf63cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:21.250 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16f1b908-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.252 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:21 np0005549474 kernel: tap16f1b908-20: left promiscuous mode
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.271 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.272 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:21.273 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[9216469e-6087-4211-804d-699e5d1ad269]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:21.285 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[44f54cf9-3940-4351-a207-c1e7aec434c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:21.286 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[0a719335-722c-4a13-8a86-edff86f949b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:21.301 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb88c93-607b-4e49-a52d-777f5e761aba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445661, 'reachable_time': 37433, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274440, 'error': None, 'target': 'ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:21.303 164283 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16f1b908-2cde-48b9-a6af-8fd9e0b59fbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  7 05:15:21 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:21.304 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[06678af8-b4fb-41c4-ba7c-fac3e786baf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:21 np0005549474 systemd[1]: run-netns-ovnmeta\x2d16f1b908\x2d2cde\x2d48b9\x2da6af\x2d8fd9e0b59fbd.mount: Deactivated successfully.
Dec  7 05:15:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.594 256757 INFO nova.virt.libvirt.driver [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Deleting instance files /var/lib/nova/instances/b6ed365e-3ce3-4449-8967-f77cf4a1dd55_del#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.595 256757 INFO nova.virt.libvirt.driver [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Deletion of /var/lib/nova/instances/b6ed365e-3ce3-4449-8967-f77cf4a1dd55_del complete#033[00m
Dec  7 05:15:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:15:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  7 05:15:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.641 256757 INFO nova.compute.manager [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.642 256757 DEBUG oslo.service.loopingcall [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.642 256757 DEBUG nova.compute.manager [-] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.642 256757 DEBUG nova.network.neutron [-] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  7 05:15:21 np0005549474 nova_compute[256753]: 2025-12-07 10:15:21.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:15:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:21.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:22 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:22 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:22 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.623 256757 DEBUG nova.network.neutron [-] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.646 256757 INFO nova.compute.manager [-] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Took 1.00 seconds to deallocate network for instance.#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.726 256757 DEBUG oslo_concurrency.lockutils [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.727 256757 DEBUG oslo_concurrency.lockutils [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.776 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.810 256757 DEBUG oslo_concurrency.processutils [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.862 256757 DEBUG nova.network.neutron [req-5f1f20f6-fce7-41ad-8b25-5c222e74c8af req-43d2e466-626b-4a0d-9506-fb8d62f58aa2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Updated VIF entry in instance network info cache for port 25408c74-10d1-48c0-a582-3f2bab5f5128. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.863 256757 DEBUG nova.network.neutron [req-5f1f20f6-fce7-41ad-8b25-5c222e74c8af req-43d2e466-626b-4a0d-9506-fb8d62f58aa2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Updating instance_info_cache with network_info: [{"id": "25408c74-10d1-48c0-a582-3f2bab5f5128", "address": "fa:16:3e:53:0f:61", "network": {"id": "16f1b908-2cde-48b9-a6af-8fd9e0b59fbd", "bridge": "br-int", "label": "tempest-network-smoke--225126562", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25408c74-10", "ovs_interfaceid": "25408c74-10d1-48c0-a582-3f2bab5f5128", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.883 256757 DEBUG oslo_concurrency.lockutils [req-5f1f20f6-fce7-41ad-8b25-5c222e74c8af req-43d2e466-626b-4a0d-9506-fb8d62f58aa2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-b6ed365e-3ce3-4449-8967-f77cf4a1dd55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.946 256757 DEBUG nova.compute.manager [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Received event network-vif-unplugged-25408c74-10d1-48c0-a582-3f2bab5f5128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.947 256757 DEBUG oslo_concurrency.lockutils [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.947 256757 DEBUG oslo_concurrency.lockutils [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.948 256757 DEBUG oslo_concurrency.lockutils [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.948 256757 DEBUG nova.compute.manager [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] No waiting events found dispatching network-vif-unplugged-25408c74-10d1-48c0-a582-3f2bab5f5128 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.948 256757 WARNING nova.compute.manager [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Received unexpected event network-vif-unplugged-25408c74-10d1-48c0-a582-3f2bab5f5128 for instance with vm_state deleted and task_state None.#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.949 256757 DEBUG nova.compute.manager [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Received event network-vif-plugged-25408c74-10d1-48c0-a582-3f2bab5f5128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.949 256757 DEBUG oslo_concurrency.lockutils [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.950 256757 DEBUG oslo_concurrency.lockutils [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.950 256757 DEBUG oslo_concurrency.lockutils [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.951 256757 DEBUG nova.compute.manager [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] No waiting events found dispatching network-vif-plugged-25408c74-10d1-48c0-a582-3f2bab5f5128 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.951 256757 WARNING nova.compute.manager [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Received unexpected event network-vif-plugged-25408c74-10d1-48c0-a582-3f2bab5f5128 for instance with vm_state deleted and task_state None.#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.951 256757 DEBUG nova.compute.manager [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Received event network-vif-deleted-25408c74-10d1-48c0-a582-3f2bab5f5128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.952 256757 INFO nova.compute.manager [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Neutron deleted interface 25408c74-10d1-48c0-a582-3f2bab5f5128; detaching it from the instance and deleting it from the info cache#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.952 256757 DEBUG nova.network.neutron [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:15:22 np0005549474 nova_compute[256753]: 2025-12-07 10:15:22.981 256757 DEBUG nova.compute.manager [req-8f2f3ff6-b6e8-403e-ae0c-d836b12f39f6 req-95ceef6c-885c-4f70-adcd-abc58506bc0e ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Detach interface failed, port_id=25408c74-10d1-48c0-a582-3f2bab5f5128, reason: Instance b6ed365e-3ce3-4449-8967-f77cf4a1dd55 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  7 05:15:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:23.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v995: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 12 KiB/s wr, 6 op/s
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3211854813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.267 256757 DEBUG oslo_concurrency.processutils [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.272 256757 DEBUG nova.compute.provider_tree [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.288 256757 DEBUG nova.scheduler.client.report [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.320 256757 DEBUG oslo_concurrency.lockutils [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.323 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.323 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.323 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.323 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.363 256757 INFO nova.scheduler.client.report [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Deleted allocations for instance b6ed365e-3ce3-4449-8967-f77cf4a1dd55#033[00m
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.429 256757 DEBUG oslo_concurrency.lockutils [None req-2b683210-38f8-41b8-9e10-141e42dbe77e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "b6ed365e-3ce3-4449-8967-f77cf4a1dd55" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:15:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3211758301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.762 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:15:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:23.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.973 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.975 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4594MB free_disk=59.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.975 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:23 np0005549474 nova_compute[256753]: 2025-12-07 10:15:23.975 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:24 np0005549474 nova_compute[256753]: 2025-12-07 10:15:24.028 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  7 05:15:24 np0005549474 nova_compute[256753]: 2025-12-07 10:15:24.028 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  7 05:15:24 np0005549474 nova_compute[256753]: 2025-12-07 10:15:24.045 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/230914401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 05:15:24 np0005549474 nova_compute[256753]: 2025-12-07 10:15:24.511 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:15:24 np0005549474 nova_compute[256753]: 2025-12-07 10:15:24.519 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  7 05:15:24 np0005549474 nova_compute[256753]: 2025-12-07 10:15:24.548 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  7 05:15:24 np0005549474 nova_compute[256753]: 2025-12-07 10:15:24.579 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  7 05:15:24 np0005549474 nova_compute[256753]: 2025-12-07 10:15:24.579 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:15:24 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:24.635 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:15:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v996: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 20 KiB/s wr, 36 op/s
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:15:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:15:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:15:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:25.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:15:25 np0005549474 podman[274673]: 2025-12-07 10:15:25.40006203 +0000 UTC m=+0.063802851 container create 06b17c9ab4db860ad94195a3a20096863025eef3efcbf86a1c85e752363d8b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 05:15:25 np0005549474 systemd[1]: Started libpod-conmon-06b17c9ab4db860ad94195a3a20096863025eef3efcbf86a1c85e752363d8b53.scope.
Dec  7 05:15:25 np0005549474 podman[274673]: 2025-12-07 10:15:25.374520968 +0000 UTC m=+0.038261849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:15:25 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:15:25 np0005549474 podman[274673]: 2025-12-07 10:15:25.489094155 +0000 UTC m=+0.152835036 container init 06b17c9ab4db860ad94195a3a20096863025eef3efcbf86a1c85e752363d8b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 05:15:25 np0005549474 podman[274673]: 2025-12-07 10:15:25.50072762 +0000 UTC m=+0.164468411 container start 06b17c9ab4db860ad94195a3a20096863025eef3efcbf86a1c85e752363d8b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:15:25 np0005549474 podman[274673]: 2025-12-07 10:15:25.503933657 +0000 UTC m=+0.167674478 container attach 06b17c9ab4db860ad94195a3a20096863025eef3efcbf86a1c85e752363d8b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:15:25 np0005549474 vibrant_allen[274690]: 167 167
Dec  7 05:15:25 np0005549474 systemd[1]: libpod-06b17c9ab4db860ad94195a3a20096863025eef3efcbf86a1c85e752363d8b53.scope: Deactivated successfully.
Dec  7 05:15:25 np0005549474 podman[274673]: 2025-12-07 10:15:25.510046293 +0000 UTC m=+0.173787124 container died 06b17c9ab4db860ad94195a3a20096863025eef3efcbf86a1c85e752363d8b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:15:25 np0005549474 systemd[1]: var-lib-containers-storage-overlay-49a0707100ed8def382fff9a8f27382624fad72b92f0135a14553d8f0beb7a18-merged.mount: Deactivated successfully.
Dec  7 05:15:25 np0005549474 podman[274673]: 2025-12-07 10:15:25.560963534 +0000 UTC m=+0.224704365 container remove 06b17c9ab4db860ad94195a3a20096863025eef3efcbf86a1c85e752363d8b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_allen, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:15:25 np0005549474 systemd[1]: libpod-conmon-06b17c9ab4db860ad94195a3a20096863025eef3efcbf86a1c85e752363d8b53.scope: Deactivated successfully.
Dec  7 05:15:25 np0005549474 nova_compute[256753]: 2025-12-07 10:15:25.578 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:15:25 np0005549474 nova_compute[256753]: 2025-12-07 10:15:25.579 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  7 05:15:25 np0005549474 nova_compute[256753]: 2025-12-07 10:15:25.579 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  7 05:15:25 np0005549474 nova_compute[256753]: 2025-12-07 10:15:25.606 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  7 05:15:25 np0005549474 nova_compute[256753]: 2025-12-07 10:15:25.607 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:15:25 np0005549474 nova_compute[256753]: 2025-12-07 10:15:25.607 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  7 05:15:25 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec  7 05:15:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 05:15:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:15:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:25 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:15:25 np0005549474 nova_compute[256753]: 2025-12-07 10:15:25.756 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:25 np0005549474 podman[274715]: 2025-12-07 10:15:25.789007718 +0000 UTC m=+0.082076146 container create 341610ff6d5fc83127af0bcb8d98aeebc4a223a71a504d10c33a62b39b7a6820 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_khayyam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 05:15:25 np0005549474 systemd[1]: Started libpod-conmon-341610ff6d5fc83127af0bcb8d98aeebc4a223a71a504d10c33a62b39b7a6820.scope.
Dec  7 05:15:25 np0005549474 podman[274715]: 2025-12-07 10:15:25.771894564 +0000 UTC m=+0.064963012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:15:25 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:15:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6e01e319a5926e3b4ac882f259bbd369cfd51fdeb95f24216ecac26fa87ac2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6e01e319a5926e3b4ac882f259bbd369cfd51fdeb95f24216ecac26fa87ac2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6e01e319a5926e3b4ac882f259bbd369cfd51fdeb95f24216ecac26fa87ac2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6e01e319a5926e3b4ac882f259bbd369cfd51fdeb95f24216ecac26fa87ac2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6e01e319a5926e3b4ac882f259bbd369cfd51fdeb95f24216ecac26fa87ac2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:25.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:25 np0005549474 podman[274715]: 2025-12-07 10:15:25.925121289 +0000 UTC m=+0.218189767 container init 341610ff6d5fc83127af0bcb8d98aeebc4a223a71a504d10c33a62b39b7a6820 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:15:25 np0005549474 podman[274715]: 2025-12-07 10:15:25.935382107 +0000 UTC m=+0.228450535 container start 341610ff6d5fc83127af0bcb8d98aeebc4a223a71a504d10c33a62b39b7a6820 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_khayyam, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 05:15:25 np0005549474 podman[274715]: 2025-12-07 10:15:25.938679166 +0000 UTC m=+0.231747644 container attach 341610ff6d5fc83127af0bcb8d98aeebc4a223a71a504d10c33a62b39b7a6820 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_khayyam, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 05:15:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:26 np0005549474 nova_compute[256753]: 2025-12-07 10:15:26.155 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:26 np0005549474 amazing_khayyam[274731]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:15:26 np0005549474 amazing_khayyam[274731]: --> All data devices are unavailable
Dec  7 05:15:26 np0005549474 systemd[1]: libpod-341610ff6d5fc83127af0bcb8d98aeebc4a223a71a504d10c33a62b39b7a6820.scope: Deactivated successfully.
Dec  7 05:15:26 np0005549474 podman[274746]: 2025-12-07 10:15:26.330949054 +0000 UTC m=+0.025193035 container died 341610ff6d5fc83127af0bcb8d98aeebc4a223a71a504d10c33a62b39b7a6820 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_khayyam, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:15:26 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3a6e01e319a5926e3b4ac882f259bbd369cfd51fdeb95f24216ecac26fa87ac2-merged.mount: Deactivated successfully.
Dec  7 05:15:26 np0005549474 podman[274746]: 2025-12-07 10:15:26.378524554 +0000 UTC m=+0.072768495 container remove 341610ff6d5fc83127af0bcb8d98aeebc4a223a71a504d10c33a62b39b7a6820 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:15:26 np0005549474 systemd[1]: libpod-conmon-341610ff6d5fc83127af0bcb8d98aeebc4a223a71a504d10c33a62b39b7a6820.scope: Deactivated successfully.
Dec  7 05:15:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v997: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 20 KiB/s wr, 31 op/s
Dec  7 05:15:26 np0005549474 nova_compute[256753]: 2025-12-07 10:15:26.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:15:26 np0005549474 ceph-mon[74516]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec  7 05:15:27 np0005549474 podman[274852]: 2025-12-07 10:15:27.052235134 +0000 UTC m=+0.054545651 container create f628053277a871b4f0bc11d41becb67535b885021a093b66ae7d73d566e59a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:15:27 np0005549474 systemd[1]: Started libpod-conmon-f628053277a871b4f0bc11d41becb67535b885021a093b66ae7d73d566e59a67.scope.
Dec  7 05:15:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:27.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:27 np0005549474 podman[274852]: 2025-12-07 10:15:27.029859077 +0000 UTC m=+0.032169654 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:15:27 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:15:27 np0005549474 podman[274852]: 2025-12-07 10:15:27.135331068 +0000 UTC m=+0.137641605 container init f628053277a871b4f0bc11d41becb67535b885021a093b66ae7d73d566e59a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:15:27 np0005549474 podman[274852]: 2025-12-07 10:15:27.142472801 +0000 UTC m=+0.144783308 container start f628053277a871b4f0bc11d41becb67535b885021a093b66ae7d73d566e59a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euclid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 05:15:27 np0005549474 podman[274852]: 2025-12-07 10:15:27.145352009 +0000 UTC m=+0.147662526 container attach f628053277a871b4f0bc11d41becb67535b885021a093b66ae7d73d566e59a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euclid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:15:27 np0005549474 tender_euclid[274868]: 167 167
Dec  7 05:15:27 np0005549474 systemd[1]: libpod-f628053277a871b4f0bc11d41becb67535b885021a093b66ae7d73d566e59a67.scope: Deactivated successfully.
Dec  7 05:15:27 np0005549474 podman[274852]: 2025-12-07 10:15:27.147795025 +0000 UTC m=+0.150105532 container died f628053277a871b4f0bc11d41becb67535b885021a093b66ae7d73d566e59a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 05:15:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:15:27.171Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:15:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:15:27.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:15:27 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a2ac6e2ecd92fee56eee19d595a408080e7db6022daf3751f244fe84d457cef1-merged.mount: Deactivated successfully.
Dec  7 05:15:27 np0005549474 podman[274852]: 2025-12-07 10:15:27.186559496 +0000 UTC m=+0.188870003 container remove f628053277a871b4f0bc11d41becb67535b885021a093b66ae7d73d566e59a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  7 05:15:27 np0005549474 systemd[1]: libpod-conmon-f628053277a871b4f0bc11d41becb67535b885021a093b66ae7d73d566e59a67.scope: Deactivated successfully.
Dec  7 05:15:27 np0005549474 podman[274892]: 2025-12-07 10:15:27.416642926 +0000 UTC m=+0.069532047 container create ee7d60631c783d510909f0e141a62df775bd892931cbc6e6a748ef712ac8c7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bassi, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 05:15:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  7 05:15:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:15:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:15:27 np0005549474 systemd[1]: Started libpod-conmon-ee7d60631c783d510909f0e141a62df775bd892931cbc6e6a748ef712ac8c7f8.scope.
Dec  7 05:15:27 np0005549474 podman[274892]: 2025-12-07 10:15:27.387687051 +0000 UTC m=+0.040576212 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:15:27 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:15:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadd491567be1b5e1020183160a12c4a3656598f68e5931d179a542da2704721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadd491567be1b5e1020183160a12c4a3656598f68e5931d179a542da2704721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadd491567be1b5e1020183160a12c4a3656598f68e5931d179a542da2704721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:27 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadd491567be1b5e1020183160a12c4a3656598f68e5931d179a542da2704721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:27 np0005549474 podman[274892]: 2025-12-07 10:15:27.537425821 +0000 UTC m=+0.190314942 container init ee7d60631c783d510909f0e141a62df775bd892931cbc6e6a748ef712ac8c7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bassi, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 05:15:27 np0005549474 podman[274892]: 2025-12-07 10:15:27.549856869 +0000 UTC m=+0.202745950 container start ee7d60631c783d510909f0e141a62df775bd892931cbc6e6a748ef712ac8c7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bassi, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:15:27 np0005549474 podman[274892]: 2025-12-07 10:15:27.553362173 +0000 UTC m=+0.206251324 container attach ee7d60631c783d510909f0e141a62df775bd892931cbc6e6a748ef712ac8c7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bassi, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:15:27 np0005549474 nova_compute[256753]: 2025-12-07 10:15:27.748 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:15:27 np0005549474 nova_compute[256753]: 2025-12-07 10:15:27.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:15:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]: {
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:    "0": [
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:        {
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            "devices": [
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "/dev/loop3"
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            ],
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            "lv_name": "ceph_lv0",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            "lv_size": "21470642176",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            "name": "ceph_lv0",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            "tags": {
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.cluster_name": "ceph",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.crush_device_class": "",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.encrypted": "0",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.osd_id": "0",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.type": "block",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.vdo": "0",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:                "ceph.with_tpm": "0"
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            },
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            "type": "block",
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:            "vg_name": "ceph_vg0"
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:        }
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]:    ]
Dec  7 05:15:27 np0005549474 trusting_bassi[274909]: }
Dec  7 05:15:27 np0005549474 systemd[1]: libpod-ee7d60631c783d510909f0e141a62df775bd892931cbc6e6a748ef712ac8c7f8.scope: Deactivated successfully.
Dec  7 05:15:27 np0005549474 podman[274892]: 2025-12-07 10:15:27.871097439 +0000 UTC m=+0.523986540 container died ee7d60631c783d510909f0e141a62df775bd892931cbc6e6a748ef712ac8c7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:15:27 np0005549474 systemd[1]: var-lib-containers-storage-overlay-aadd491567be1b5e1020183160a12c4a3656598f68e5931d179a542da2704721-merged.mount: Deactivated successfully.
Dec  7 05:15:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:27.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:27 np0005549474 podman[274892]: 2025-12-07 10:15:27.922975386 +0000 UTC m=+0.575864447 container remove ee7d60631c783d510909f0e141a62df775bd892931cbc6e6a748ef712ac8c7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bassi, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 05:15:27 np0005549474 systemd[1]: libpod-conmon-ee7d60631c783d510909f0e141a62df775bd892931cbc6e6a748ef712ac8c7f8.scope: Deactivated successfully.
Dec  7 05:15:27 np0005549474 nova_compute[256753]: 2025-12-07 10:15:27.954 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:28 np0005549474 nova_compute[256753]: 2025-12-07 10:15:28.093 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:28 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:28 np0005549474 podman[275024]: 2025-12-07 10:15:28.652099178 +0000 UTC m=+0.069795884 container create 861a803914db953e4e8f5a7d46d966f2b4223f62acc1c84ac264ecd405f1328e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:15:28 np0005549474 systemd[1]: Started libpod-conmon-861a803914db953e4e8f5a7d46d966f2b4223f62acc1c84ac264ecd405f1328e.scope.
Dec  7 05:15:28 np0005549474 podman[275024]: 2025-12-07 10:15:28.624035517 +0000 UTC m=+0.041732273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:15:28 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:15:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v998: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 20 KiB/s wr, 31 op/s
Dec  7 05:15:28 np0005549474 podman[275024]: 2025-12-07 10:15:28.745929492 +0000 UTC m=+0.163626228 container init 861a803914db953e4e8f5a7d46d966f2b4223f62acc1c84ac264ecd405f1328e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 05:15:28 np0005549474 podman[275024]: 2025-12-07 10:15:28.755934384 +0000 UTC m=+0.173631090 container start 861a803914db953e4e8f5a7d46d966f2b4223f62acc1c84ac264ecd405f1328e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:15:28 np0005549474 podman[275024]: 2025-12-07 10:15:28.759651854 +0000 UTC m=+0.177348560 container attach 861a803914db953e4e8f5a7d46d966f2b4223f62acc1c84ac264ecd405f1328e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 05:15:28 np0005549474 xenodochial_mestorf[275040]: 167 167
Dec  7 05:15:28 np0005549474 systemd[1]: libpod-861a803914db953e4e8f5a7d46d966f2b4223f62acc1c84ac264ecd405f1328e.scope: Deactivated successfully.
Dec  7 05:15:28 np0005549474 podman[275024]: 2025-12-07 10:15:28.764184898 +0000 UTC m=+0.181881594 container died 861a803914db953e4e8f5a7d46d966f2b4223f62acc1c84ac264ecd405f1328e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:15:28 np0005549474 systemd[1]: var-lib-containers-storage-overlay-85db7d83cde98974865f2ed6cfc65a211ac1baca3d478ed61428eb18dcd6b4f9-merged.mount: Deactivated successfully.
Dec  7 05:15:28 np0005549474 podman[275024]: 2025-12-07 10:15:28.817031611 +0000 UTC m=+0.234728287 container remove 861a803914db953e4e8f5a7d46d966f2b4223f62acc1c84ac264ecd405f1328e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  7 05:15:28 np0005549474 systemd[1]: libpod-conmon-861a803914db953e4e8f5a7d46d966f2b4223f62acc1c84ac264ecd405f1328e.scope: Deactivated successfully.
Dec  7 05:15:29 np0005549474 podman[275065]: 2025-12-07 10:15:29.057146363 +0000 UTC m=+0.062558508 container create 449a40e922f20fe86c2baec5d2cd2562cbf72980bf490bbbc573007d79a4ca5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:15:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:29.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:29 np0005549474 systemd[1]: Started libpod-conmon-449a40e922f20fe86c2baec5d2cd2562cbf72980bf490bbbc573007d79a4ca5f.scope.
Dec  7 05:15:29 np0005549474 podman[275065]: 2025-12-07 10:15:29.028118975 +0000 UTC m=+0.033531170 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:15:29 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:15:29 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18cc7809ae24ae9868252a304af062cadc9a78c37ca0b8267f72ab3e5bc2a6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:29 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18cc7809ae24ae9868252a304af062cadc9a78c37ca0b8267f72ab3e5bc2a6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:29 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18cc7809ae24ae9868252a304af062cadc9a78c37ca0b8267f72ab3e5bc2a6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:29 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18cc7809ae24ae9868252a304af062cadc9a78c37ca0b8267f72ab3e5bc2a6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:29 np0005549474 podman[275065]: 2025-12-07 10:15:29.16362113 +0000 UTC m=+0.169033275 container init 449a40e922f20fe86c2baec5d2cd2562cbf72980bf490bbbc573007d79a4ca5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hugle, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:15:29 np0005549474 podman[275065]: 2025-12-07 10:15:29.1769114 +0000 UTC m=+0.182323545 container start 449a40e922f20fe86c2baec5d2cd2562cbf72980bf490bbbc573007d79a4ca5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hugle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 05:15:29 np0005549474 podman[275065]: 2025-12-07 10:15:29.181550286 +0000 UTC m=+0.186962441 container attach 449a40e922f20fe86c2baec5d2cd2562cbf72980bf490bbbc573007d79a4ca5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hugle, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:15:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:29.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:29 np0005549474 lvm[275156]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:15:29 np0005549474 lvm[275156]: VG ceph_vg0 finished
Dec  7 05:15:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:29] "GET /metrics HTTP/1.1" 200 48450 "" "Prometheus/2.51.0"
Dec  7 05:15:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:29] "GET /metrics HTTP/1.1" 200 48450 "" "Prometheus/2.51.0"
Dec  7 05:15:30 np0005549474 lvm[275160]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:15:30 np0005549474 lvm[275160]: VG ceph_vg0 finished
Dec  7 05:15:30 np0005549474 festive_hugle[275081]: {}
Dec  7 05:15:30 np0005549474 systemd[1]: libpod-449a40e922f20fe86c2baec5d2cd2562cbf72980bf490bbbc573007d79a4ca5f.scope: Deactivated successfully.
Dec  7 05:15:30 np0005549474 podman[275065]: 2025-12-07 10:15:30.067610783 +0000 UTC m=+1.073022928 container died 449a40e922f20fe86c2baec5d2cd2562cbf72980bf490bbbc573007d79a4ca5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec  7 05:15:30 np0005549474 systemd[1]: libpod-449a40e922f20fe86c2baec5d2cd2562cbf72980bf490bbbc573007d79a4ca5f.scope: Consumed 1.514s CPU time.
Dec  7 05:15:30 np0005549474 systemd[1]: var-lib-containers-storage-overlay-e18cc7809ae24ae9868252a304af062cadc9a78c37ca0b8267f72ab3e5bc2a6e-merged.mount: Deactivated successfully.
Dec  7 05:15:30 np0005549474 podman[275065]: 2025-12-07 10:15:30.124051434 +0000 UTC m=+1.129463539 container remove 449a40e922f20fe86c2baec5d2cd2562cbf72980bf490bbbc573007d79a4ca5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Dec  7 05:15:30 np0005549474 systemd[1]: libpod-conmon-449a40e922f20fe86c2baec5d2cd2562cbf72980bf490bbbc573007d79a4ca5f.scope: Deactivated successfully.
Dec  7 05:15:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:15:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:15:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:15:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v999: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.8 KiB/s wr, 30 op/s
Dec  7 05:15:30 np0005549474 nova_compute[256753]: 2025-12-07 10:15:30.758 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:31.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:31 np0005549474 nova_compute[256753]: 2025-12-07 10:15:31.158 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:31.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1000: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.8 KiB/s wr, 30 op/s
Dec  7 05:15:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:33.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:33 np0005549474 podman[275201]: 2025-12-07 10:15:33.328078458 +0000 UTC m=+0.120316613 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  7 05:15:33 np0005549474 podman[275202]: 2025-12-07 10:15:33.348506832 +0000 UTC m=+0.138638470 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  7 05:15:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:33.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1001: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.8 KiB/s wr, 30 op/s
Dec  7 05:15:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:35.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:35 np0005549474 nova_compute[256753]: 2025-12-07 10:15:35.792 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:36 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:36 np0005549474 nova_compute[256753]: 2025-12-07 10:15:36.116 256757 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765102521.1135519, b6ed365e-3ce3-4449-8967-f77cf4a1dd55 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  7 05:15:36 np0005549474 nova_compute[256753]: 2025-12-07 10:15:36.116 256757 INFO nova.compute.manager [-] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] VM Stopped (Lifecycle Event)
Dec  7 05:15:36 np0005549474 nova_compute[256753]: 2025-12-07 10:15:36.137 256757 DEBUG nova.compute.manager [None req-4cbdb8b5-8d64-4ead-96eb-e2d619f43f34 - - - - - -] [instance: b6ed365e-3ce3-4449-8967-f77cf4a1dd55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:15:36 np0005549474 nova_compute[256753]: 2025-12-07 10:15:36.160 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1002: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:15:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:37.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:15:37.174Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:15:37 np0005549474 podman[275252]: 2025-12-07 10:15:37.278358913 +0000 UTC m=+0.078953082 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:15:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:37.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:38.627 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:38.628 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:38.628 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
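The Acquiring/acquired/released trio above is the standard trace emitted by oslo.concurrency's lock wrapper around ProcessMonitor._check_child_processes. A minimal sketch of the pattern that produces these three lines (the decorated body is illustrative, not neutron's actual implementation):

    from oslo_concurrency import lockutils

    # lockutils.synchronized() logs "Acquiring lock ...", then
    # "Lock ... acquired ... waited Ns", runs the body, and finally logs
    # "Lock ... released ... held Ns" -- the three lines seen above.
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Placeholder body: verify each monitored child process
        # (e.g. the haproxy spawned for metadata) is still alive.
        pass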
Dec  7 05:15:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1003: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:15:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:15:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:39.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:15:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:39.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:39] "GET /metrics HTTP/1.1" 200 48450 "" "Prometheus/2.51.0"
Dec  7 05:15:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:39] "GET /metrics HTTP/1.1" 200 48450 "" "Prometheus/2.51.0"
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.022 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.023 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.038 256757 DEBUG nova.compute.manager [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.106 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.106 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.113 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.113 256757 INFO nova.compute.claims [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.216 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:15:40 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:15:40 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425210097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.700 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
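The resource tracker gathers Ceph pool capacity by shelling out through oslo_concurrency.processutils, which emits the matching "Running cmd (subprocess)" and "returned: 0 in 0.484s" pair above. A hedged sketch of that call:

    import json
    from oslo_concurrency import processutils

    # processutils.execute() logs the command before running it and the exit
    # code plus wall-clock time afterwards, then returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)  # cluster and per-pool usage, as consumed by the driver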
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.707 256757 DEBUG nova.compute.provider_tree [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.728 256757 DEBUG nova.scheduler.client.report [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
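The inventory reported above determines the capacity placement will schedule against; per resource class it is (total - reserved) * allocation_ratio. Worked from the logged values:

    # Arithmetic implied by the inventory line above.
    vcpu    = (8    - 0)   * 4.0   # 32 schedulable VCPUs
    ram_mb  = (7680 - 512) * 1.0   # 7168 MB schedulable MEMORY_MB
    disk_gb = (59   - 1)   * 0.9   # 52.2 schedulable DISK_GB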
Dec  7 05:15:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1004: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.758 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.759 256757 DEBUG nova.compute.manager [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.794 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.824 256757 DEBUG nova.compute.manager [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.825 256757 DEBUG nova.network.neutron [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.845 256757 INFO nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.862 256757 DEBUG nova.compute.manager [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.957 256757 DEBUG nova.compute.manager [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.959 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.959 256757 INFO nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Creating image(s)#033[00m
Dec  7 05:15:40 np0005549474 nova_compute[256753]: 2025-12-07 10:15:40.995 256757 DEBUG nova.storage.rbd_utils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 0e659348-3b39-4619-862c-1b89d81d26b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:15:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.036 256757 DEBUG nova.storage.rbd_utils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 0e659348-3b39-4619-862c-1b89d81d26b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.076 256757 DEBUG nova.storage.rbd_utils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 0e659348-3b39-4619-862c-1b89d81d26b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.080 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.107 256757 DEBUG nova.policy [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f27cf20bf8c4429aa12589418a57e41', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ad61a97ffab4252be3eafb028b560c1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  7 05:15:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:41.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.163 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.168 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
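The base-image probe above runs qemu-img under oslo's prlimit wrapper (1 GiB of address space, 30 s of CPU, matching "--as=1073741824 --cpu=30") so a malformed image cannot exhaust the host; --force-share permits probing a file that may be in use. A sketch under those assumptions:

    import json
    from oslo_concurrency import processutils

    # Mirrors the guarded probe logged above; prlimit= makes execute() wrap
    # the command with "python3 -m oslo_concurrency.prlimit".
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
    info = json.loads(out)  # e.g. info['format'], info['virtual-size']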
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.169 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.170 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.170 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.202 256757 DEBUG nova.storage.rbd_utils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 0e659348-3b39-4619-862c-1b89d81d26b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.206 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b 0e659348-3b39-4619-862c-1b89d81d26b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.486 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b 0e659348-3b39-4619-862c-1b89d81d26b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.580 256757 DEBUG nova.storage.rbd_utils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] resizing rbd image 0e659348-3b39-4619-862c-1b89d81d26b3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
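After the rbd import above succeeds, the driver resizes the new image in the vms pool to the flavor's root disk ("resizing ... to 1073741824", i.e. 1 GiB). A hedged sketch of the equivalent operation with the python rbd binding that nova wraps; pool, client id and image name mirror the log, the rest is illustrative:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           '0e659348-3b39-4619-862c-1b89d81d26b3_disk') as img:
                img.resize(1073741824)  # 1 GiB, matching the logged size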
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.724 256757 DEBUG nova.objects.instance [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'migration_context' on Instance uuid 0e659348-3b39-4619-862c-1b89d81d26b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.750 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.750 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Ensure instance console log exists: /var/lib/nova/instances/0e659348-3b39-4619-862c-1b89d81d26b3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.751 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.751 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.751 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:15:41 np0005549474 nova_compute[256753]: 2025-12-07 10:15:41.794 256757 DEBUG nova.network.neutron [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Successfully created port: 8f12df55-697f-4079-af36-c87cc2d6cff1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  7 05:15:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:41.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:15:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:15:42
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.control', '.nfs', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.log', '.rgw.root']
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1005: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:15:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
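The pg targets in the autoscaler lines above reproduce as usage_ratio * bias * 300, where 300 is plausibly this cluster's total PG budget (mon_target_pg_per_osd times the OSD count); each result is then quantized to a power of two with a floor, here landing on every pool's current pg_num, so no changes are proposed. A hedged arithmetic check against two of the logged values:

    # Spot checks: ratio * bias * 300 reproduces the logged "pg target".
    assert abs(0.000665858301588852 * 1.0 * 300
               - 0.19975749047665559) < 1e-12   # pool 'images'
    assert abs(5.087256625643029e-07 * 4.0 * 300
               - 0.0006104707950771635) < 1e-12  # 'cephfs.cephfs.meta', bias 4.0
    # Hence "pg target 0.1997... quantized to 32 (current 32)" above.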
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:15:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:15:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:43.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:43 np0005549474 nova_compute[256753]: 2025-12-07 10:15:43.301 256757 DEBUG nova.network.neutron [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Successfully updated port: 8f12df55-697f-4079-af36-c87cc2d6cff1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  7 05:15:43 np0005549474 nova_compute[256753]: 2025-12-07 10:15:43.319 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:15:43 np0005549474 nova_compute[256753]: 2025-12-07 10:15:43.319 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquired lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:15:43 np0005549474 nova_compute[256753]: 2025-12-07 10:15:43.319 256757 DEBUG nova.network.neutron [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  7 05:15:43 np0005549474 nova_compute[256753]: 2025-12-07 10:15:43.425 256757 DEBUG nova.compute.manager [req-764ef50e-75f2-45b6-98aa-def33bdf69d9 req-088601ab-acd5-46fa-9a95-775495c0a398 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-changed-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:15:43 np0005549474 nova_compute[256753]: 2025-12-07 10:15:43.425 256757 DEBUG nova.compute.manager [req-764ef50e-75f2-45b6-98aa-def33bdf69d9 req-088601ab-acd5-46fa-9a95-775495c0a398 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Refreshing instance network info cache due to event network-changed-8f12df55-697f-4079-af36-c87cc2d6cff1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:15:43 np0005549474 nova_compute[256753]: 2025-12-07 10:15:43.426 256757 DEBUG oslo_concurrency.lockutils [req-764ef50e-75f2-45b6-98aa-def33bdf69d9 req-088601ab-acd5-46fa-9a95-775495c0a398 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:15:43 np0005549474 nova_compute[256753]: 2025-12-07 10:15:43.476 256757 DEBUG nova.network.neutron [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  7 05:15:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:15:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:43.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:15:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1006: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:15:44 np0005549474 nova_compute[256753]: 2025-12-07 10:15:44.917 256757 DEBUG nova.network.neutron [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updating instance_info_cache with network_info: [{"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:15:44 np0005549474 nova_compute[256753]: 2025-12-07 10:15:44.942 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Releasing lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:15:44 np0005549474 nova_compute[256753]: 2025-12-07 10:15:44.943 256757 DEBUG nova.compute.manager [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Instance network_info: |[{"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  7 05:15:44 np0005549474 nova_compute[256753]: 2025-12-07 10:15:44.944 256757 DEBUG oslo_concurrency.lockutils [req-764ef50e-75f2-45b6-98aa-def33bdf69d9 req-088601ab-acd5-46fa-9a95-775495c0a398 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:15:44 np0005549474 nova_compute[256753]: 2025-12-07 10:15:44.944 256757 DEBUG nova.network.neutron [req-764ef50e-75f2-45b6-98aa-def33bdf69d9 req-088601ab-acd5-46fa-9a95-775495c0a398 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Refreshing network info cache for port 8f12df55-697f-4079-af36-c87cc2d6cff1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:15:44 np0005549474 nova_compute[256753]: 2025-12-07 10:15:44.949 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Start _get_guest_xml network_info=[{"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'image_id': 'af7b5730-2fa9-449f-8ccb-a9519582f1b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.079 256757 WARNING nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.085 256757 DEBUG nova.virt.libvirt.host [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.086 256757 DEBUG nova.virt.libvirt.host [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.088 256757 DEBUG nova.virt.libvirt.host [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.089 256757 DEBUG nova.virt.libvirt.host [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.089 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.089 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-07T10:06:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bc1a767b-c985-4370-b41e-5cb294d603d7',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.090 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.090 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.090 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.090 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.090 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.090 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.090 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.091 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.091 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.091 256757 DEBUG nova.virt.hardware [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
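The topology search traced above has exactly one solution for a 1-vCPU flavor with no flavor or image constraints: 1 socket x 1 core x 1 thread. An illustrative sketch of the factorization being enumerated (not nova's exact code):

    # Enumerate (sockets, cores, threads) splits of vcpus, bounded by the
    # logged limits (65536 each when unconstrained).
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    assert list(possible_topologies(1)) == [(1, 1, 1)]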
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.093 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:15:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:45.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 05:15:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647541742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.645 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
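The mon dump above feeds the RBD connection info for the guest's disk definition: the monitor map is parsed into host/port pairs. A hedged sketch of that parsing, assuming the standard mon dump JSON layout:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    # Each mon's "addr" looks like "192.168.122.100:6789/0"; strip the nonce
    # and split host from port for the <host name=... port=.../> disk entries.
    mons = json.loads(out)['mons']
    addrs = [m['addr'].rsplit('/', 1)[0].rsplit(':', 1) for m in mons]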
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.678 256757 DEBUG nova.storage.rbd_utils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 0e659348-3b39-4619-862c-1b89d81d26b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.683 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:15:45 np0005549474 nova_compute[256753]: 2025-12-07 10:15:45.795 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:45.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.165 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 05:15:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/949707003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.248 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.250 256757 DEBUG nova.virt.libvirt.vif [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1863127630',display_name='tempest-TestNetworkBasicOps-server-1863127630',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1863127630',id=11,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDleUosjG6jxbfP9ThQfCusjOkxMkoL0sXnnx4Uq8QkM4LpRdNkO8Kp6H6zij9tS2guSYsf4VYz23xFdwVRLlJh3l6SMCR3OC+X8RwUSHtaO65EFq5XlWeUP9iyGj+o/Q==',key_name='tempest-TestNetworkBasicOps-28232717',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-jcqd76e4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:15:40Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=0e659348-3b39-4619-862c-1b89d81d26b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.250 256757 DEBUG nova.network.os_vif_util [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.252 256757 DEBUG nova.network.os_vif_util [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:3c:19,bridge_name='br-int',has_traffic_filtering=True,id=8f12df55-697f-4079-af36-c87cc2d6cff1,network=Network(3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f12df55-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
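
The pair of records above shows nova_to_osvif_vif turning the Neutron-style VIF dict into a typed VIFOpenVSwitch object. The field mapping can be illustrated with a plain dataclass; this is a hypothetical stand-in for the real os-vif versioned object, kept only to show which logged JSON keys feed which attributes:

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitchSketch:
        # Hypothetical stand-in for os_vif's VIFOpenVSwitch; illustration only.
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        preserve_on_delete: bool
        active: bool

    def nova_vif_to_sketch(vif: dict) -> VIFOpenVSwitchSketch:
        details = vif.get("details", {})
        return VIFOpenVSwitchSketch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
            vif_name=vif["devname"],                 # tap8f12df55-69 in this trace
            has_traffic_filtering=details.get("port_filter", False),
            preserve_on_delete=vif.get("preserve_on_delete", False),
            active=vif.get("active", False),
        )

Applied to the VIF JSON logged above, this reproduces the scalar attributes visible in the "Converted object" line; the nested Network, plugin, and port-profile objects are omitted from the sketch.
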
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.253 256757 DEBUG nova.objects.instance [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e659348-3b39-4619-862c-1b89d81d26b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.279 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] End _get_guest_xml xml=<domain type="kvm">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  <uuid>0e659348-3b39-4619-862c-1b89d81d26b3</uuid>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  <name>instance-0000000b</name>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  <memory>131072</memory>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  <vcpu>1</vcpu>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  <metadata>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <nova:name>tempest-TestNetworkBasicOps-server-1863127630</nova:name>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <nova:creationTime>2025-12-07 10:15:45</nova:creationTime>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <nova:flavor name="m1.nano">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <nova:memory>128</nova:memory>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <nova:disk>1</nova:disk>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <nova:swap>0</nova:swap>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <nova:vcpus>1</nova:vcpus>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      </nova:flavor>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <nova:owner>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      </nova:owner>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <nova:ports>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <nova:port uuid="8f12df55-697f-4079-af36-c87cc2d6cff1">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        </nova:port>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      </nova:ports>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    </nova:instance>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  </metadata>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  <sysinfo type="smbios">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <system>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <entry name="manufacturer">RDO</entry>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <entry name="product">OpenStack Compute</entry>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <entry name="serial">0e659348-3b39-4619-862c-1b89d81d26b3</entry>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <entry name="uuid">0e659348-3b39-4619-862c-1b89d81d26b3</entry>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <entry name="family">Virtual Machine</entry>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    </system>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  </sysinfo>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  <os>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <boot dev="hd"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <smbios mode="sysinfo"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  <features>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <acpi/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <apic/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <vmcoreinfo/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  </features>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  <clock offset="utc">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <timer name="pit" tickpolicy="delay"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <timer name="hpet" present="no"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  </clock>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  <cpu mode="host-model" match="exact">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <topology sockets="1" cores="1" threads="1"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  </cpu>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  <devices>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <disk type="network" device="disk">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/0e659348-3b39-4619-862c-1b89d81d26b3_disk">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <target dev="vda" bus="virtio"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <disk type="network" device="cdrom">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/0e659348-3b39-4619-862c-1b89d81d26b3_disk.config">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <target dev="sda" bus="sata"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <interface type="ethernet">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <mac address="fa:16:3e:09:3c:19"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <driver name="vhost" rx_queue_size="512"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <mtu size="1442"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <target dev="tap8f12df55-69"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <serial type="pty">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <log file="/var/lib/nova/instances/0e659348-3b39-4619-862c-1b89d81d26b3/console.log" append="off"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    </serial>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <video>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    </video>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <input type="tablet" bus="usb"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <rng model="virtio">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <backend model="random">/dev/urandom</backend>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    </rng>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <controller type="usb" index="0"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    <memballoon model="virtio">
Dec  7 05:15:46 np0005549474 nova_compute[256753]:      <stats period="10"/>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:    </memballoon>
Dec  7 05:15:46 np0005549474 nova_compute[256753]:  </devices>
Dec  7 05:15:46 np0005549474 nova_compute[256753]: </domain>
Dec  7 05:15:46 np0005549474 nova_compute[256753]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
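
The domain XML above is the complete guest definition handed to libvirt. For post-mortem work it is often easier to interrogate such a dump offline; a short ElementTree walk, assuming the <domain> block has been saved to domain.xml, that recovers the RBD sources and the tap device wired up later in this trace:

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()

    # Every disk in this guest is network-backed; print pool/image and mon hosts.
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            hosts = ",".join(
                f"{h.get('name')}:{h.get('port')}" for h in src.findall("host"))
            print(src.get("name"), "via", hosts)

    # The interface target is the OVS port name used in the plug step below.
    for iface in root.findall("./devices/interface"):
        tgt = iface.find("target")
        if tgt is not None:
            print("tap device:", tgt.get("dev"))

Run against this dump it prints the vms/..._disk and vms/..._disk.config images on monitors 192.168.122.100-102:6789, plus tap8f12df55-69.
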
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.280 256757 DEBUG nova.compute.manager [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Preparing to wait for external event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.281 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.281 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.281 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
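
The three lock records above implement prepare_for_instance_event: the manager registers its interest in network-vif-plugged-8f12df55-... before plugging the VIF and launching the guest, so Neutron's later notification cannot race past it. The shape of that pattern, reduced to the standard library with hypothetical names (a sketch, not nova's actual code):

    import threading

    _events = {}
    _lock = threading.Lock()

    def prepare_for_event(tag: str) -> threading.Event:
        # Register interest *before* triggering the action, as the log does.
        with _lock:
            return _events.setdefault(tag, threading.Event())

    def deliver_event(tag: str) -> None:
        # Called from the external-event handler when Neutron reports back.
        with _lock:
            ev = _events.get(tag)
        if ev is not None:
            ev.set()

    ev = prepare_for_event(
        "network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1")
    # ... plug the VIF and launch the guest, then:
    # ev.wait(timeout=300)   # nova's vif_plugging_timeout default
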
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.281 256757 DEBUG nova.virt.libvirt.vif [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1863127630',display_name='tempest-TestNetworkBasicOps-server-1863127630',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1863127630',id=11,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDleUosjG6jxbfP9ThQfCusjOkxMkoL0sXnnx4Uq8QkM4LpRdNkO8Kp6H6zij9tS2guSYsf4VYz23xFdwVRLlJh3l6SMCR3OC+X8RwUSHtaO65EFq5XlWeUP9iyGj+o/Q==',key_name='tempest-TestNetworkBasicOps-28232717',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-jcqd76e4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:15:40Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=0e659348-3b39-4619-862c-1b89d81d26b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.282 256757 DEBUG nova.network.os_vif_util [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.282 256757 DEBUG nova.network.os_vif_util [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:3c:19,bridge_name='br-int',has_traffic_filtering=True,id=8f12df55-697f-4079-af36-c87cc2d6cff1,network=Network(3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f12df55-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.283 256757 DEBUG os_vif [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:3c:19,bridge_name='br-int',has_traffic_filtering=True,id=8f12df55-697f-4079-af36-c87cc2d6cff1,network=Network(3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f12df55-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.283 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.283 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.284 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.286 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.287 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f12df55-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.288 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f12df55-69, col_values=(('external_ids', {'iface-id': '8f12df55-697f-4079-af36-c87cc2d6cff1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:3c:19', 'vm-uuid': '0e659348-3b39-4619-862c-1b89d81d26b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.290 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:46 np0005549474 NetworkManager[49051]: <info>  [1765102546.2906] manager: (tap8f12df55-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.294 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.297 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.298 256757 INFO os_vif [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:3c:19,bridge_name='br-int',has_traffic_filtering=True,id=8f12df55-697f-4079-af36-c87cc2d6cff1,network=Network(3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f12df55-69')#033[00m
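
The transaction pair above (AddPortCommand, then DbSetCommand on the Interface row) is the whole of the OVS plug: attach the tap port to br-int and stamp it with the external_ids that ovn-controller matches on. os-vif drives this over OVSDB via ovsdbapp, as logged; roughly the same effect from a script, with every value copied from the log, looks like this sketch:

    import subprocess

    port = "tap8f12df55-69"
    iface_id = "8f12df55-697f-4079-af36-c87cc2d6cff1"
    mac = "fa:16:3e:09:3c:19"
    vm_uuid = "0e659348-3b39-4619-862c-1b89d81d26b3"

    # Equivalent of AddPortCommand(bridge=br-int, port=..., may_exist=True).
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
                   check=True)
    # Equivalent of DbSetCommand(table=Interface, record=..., external_ids=...).
    subprocess.run(
        ["ovs-vsctl", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True,
    )

The iface-id is what ties the OVS interface back to the Neutron port; it is the key ovn-controller uses when it claims the lport a second later (05:15:47 below).
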
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.367 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.368 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.369 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No VIF found with MAC fa:16:3e:09:3c:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.369 256757 INFO nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Using config drive#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.404 256757 DEBUG nova.storage.rbd_utils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 0e659348-3b39-4619-862c-1b89d81d26b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:15:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1007: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.768 256757 INFO nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Creating config drive at /var/lib/nova/instances/0e659348-3b39-4619-862c-1b89d81d26b3/disk.config#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.772 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e659348-3b39-4619-862c-1b89d81d26b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpifov8x6z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.900 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e659348-3b39-4619-862c-1b89d81d26b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpifov8x6z" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.935 256757 DEBUG nova.storage.rbd_utils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 0e659348-3b39-4619-862c-1b89d81d26b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:15:46 np0005549474 nova_compute[256753]: 2025-12-07 10:15:46.941 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0e659348-3b39-4619-862c-1b89d81d26b3/disk.config 0e659348-3b39-4619-862c-1b89d81d26b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.119 256757 DEBUG nova.network.neutron [req-764ef50e-75f2-45b6-98aa-def33bdf69d9 req-088601ab-acd5-46fa-9a95-775495c0a398 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updated VIF entry in instance network info cache for port 8f12df55-697f-4079-af36-c87cc2d6cff1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.121 256757 DEBUG nova.network.neutron [req-764ef50e-75f2-45b6-98aa-def33bdf69d9 req-088601ab-acd5-46fa-9a95-775495c0a398 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updating instance_info_cache with network_info: [{"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:15:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:47.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.139 256757 DEBUG oslo_concurrency.processutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0e659348-3b39-4619-862c-1b89d81d26b3/disk.config 0e659348-3b39-4619-862c-1b89d81d26b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.140 256757 INFO nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Deleting local config drive /var/lib/nova/instances/0e659348-3b39-4619-862c-1b89d81d26b3/disk.config because it was imported into RBD.#033[00m
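
Records 05:15:46.768 through 05:15:47.140 are the full config-drive lifecycle on a Ceph-backed deployment: check that no _disk.config image exists, build the ISO locally with mkisofs, import it into the vms pool, then delete the local copy. The same sequence compressed into a sketch, argv values taken from the log and error handling omitted:

    import os
    import subprocess

    inst = "0e659348-3b39-4619-862c-1b89d81d26b3"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"
    staging = "/tmp/tmpifov8x6z"   # metadata tree nova staged (name from the log)

    # 1. Build the ISO9660 config drive (volume label config-2, Joliet+RockRidge).
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r", "-V", "config-2", staging],
        check=True,
    )
    # 2. Import it into the Ceph vms pool as <uuid>_disk.config.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    # 3. Drop the local copy once the image lives in RBD, as the log reports.
    os.unlink(iso)

The resulting image is exactly the vms/..._disk.config source already referenced by the cdrom device in the guest XML above.
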
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.144 256757 DEBUG oslo_concurrency.lockutils [req-764ef50e-75f2-45b6-98aa-def33bdf69d9 req-088601ab-acd5-46fa-9a95-775495c0a398 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:15:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:15:47.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:15:47 np0005549474 kernel: tap8f12df55-69: entered promiscuous mode
Dec  7 05:15:47 np0005549474 NetworkManager[49051]: <info>  [1765102547.2052] manager: (tap8f12df55-69): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.206 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:47 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:47Z|00079|binding|INFO|Claiming lport 8f12df55-697f-4079-af36-c87cc2d6cff1 for this chassis.
Dec  7 05:15:47 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:47Z|00080|binding|INFO|8f12df55-697f-4079-af36-c87cc2d6cff1: Claiming fa:16:3e:09:3c:19 10.100.0.6
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.222 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:3c:19 10.100.0.6'], port_security=['fa:16:3e:09:3c:19 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0e659348-3b39-4619-862c-1b89d81d26b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e97b448-8139-4111-81bb-273831e7b5f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a5ab910-6849-428d-baf0-515c2f9ffc5e, chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=8f12df55-697f-4079-af36-c87cc2d6cff1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.224 164143 INFO neutron.agent.ovn.metadata.agent [-] Port 8f12df55-697f-4079-af36-c87cc2d6cff1 in datapath 3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6 bound to our chassis#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.226 164143 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6#033[00m
Dec  7 05:15:47 np0005549474 systemd-machined[217882]: New machine qemu-5-instance-0000000b.
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.240 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[422521d7-f43b-412b-81db-af6b1092db47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.241 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c3486ca-b1 in ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
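
Provisioning the metadata datapath means giving network 3c3486ca-... a dedicated ovnmeta- namespace with one end of a veth pair inside it; the agent does this through pyroute2 under privsep, as the surrounding records show. The equivalent plumbing with plain iproute2 driven from Python, names copied from the log:

    import subprocess

    ns = "ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6"
    outer, inner = "tap3c3486ca-b0", "tap3c3486ca-b1"

    subprocess.run(["ip", "netns", "add", ns], check=True)
    subprocess.run(["ip", "link", "add", outer, "type", "veth",
                    "peer", "name", inner], check=True)
    subprocess.run(["ip", "link", "set", inner, "netns", ns], check=True)
    subprocess.run(["ip", "-n", ns, "link", "set", inner, "up"], check=True)
    subprocess.run(["ip", "link", "set", outer, "up"], check=True)

The outer end is then attached to br-int with iface-id ab85367c-... (the AddPortCommand/DbSetCommand pair a few records below), which is how metadata traffic from the guest reaches the haproxy instance configured at the end of this trace.
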
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.243 262215 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c3486ca-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.243 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2e97c8-bdc2-4e95-929e-88f6e6406a6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.244 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[324fd7bf-af44-4741-986e-4c52c2c03c0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.261 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[b297c47a-4869-4ac7-a6d5-98f2e75e578e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.292 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[f92ce5da-7a9e-4094-8d29-fe4d7534c75f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 systemd[1]: Started Virtual Machine qemu-5-instance-0000000b.
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.300 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:47 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:47Z|00081|binding|INFO|Setting lport 8f12df55-697f-4079-af36-c87cc2d6cff1 ovn-installed in OVS
Dec  7 05:15:47 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:47Z|00082|binding|INFO|Setting lport 8f12df55-697f-4079-af36-c87cc2d6cff1 up in Southbound
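
ovn-controller claims the lport because the iface-id written into the Interface's external_ids matches a Port_Binding whose requested-chassis option names this host, then marks the port up in the southbound DB. One way to confirm the binding from the node, assuming ovn-sbctl is installed and can reach the southbound database:

    import subprocess

    lport = "8f12df55-697f-4079-af36-c87cc2d6cff1"
    # Ask the southbound DB which chassis holds the binding and whether it is up.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "find",
         "Port_Binding", f"logical_port={lport}"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)
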
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.306 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:47 np0005549474 systemd-udevd[275633]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.331 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[f3946b1f-036f-46bd-86de-a7d09b93c659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.339 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[d3460add-bf9c-4132-acf0-2743cc84500c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 systemd-udevd[275636]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:15:47 np0005549474 NetworkManager[49051]: <info>  [1765102547.3444] manager: (tap3c3486ca-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Dec  7 05:15:47 np0005549474 NetworkManager[49051]: <info>  [1765102547.3528] device (tap8f12df55-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 05:15:47 np0005549474 NetworkManager[49051]: <info>  [1765102547.3707] device (tap8f12df55-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.397 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[05856105-fe9d-43b1-8ec8-d5f5bed8cbcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.400 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[0f31a6ef-6835-4dfd-a8b4-2bf7ebfe5074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 NetworkManager[49051]: <info>  [1765102547.4214] device (tap3c3486ca-b0): carrier: link connected
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.427 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee814ac-5a28-4402-aeda-b78db0433972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.448 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[03de4ed2-50f9-47f3-ae15-346c1205215f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c3486ca-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:63:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451008, 'reachable_time': 18987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275662, 'error': None, 'target': 'ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.471 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[126a2cc8-f523-4225-aa45-bf0dd4ab61c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefd:6370'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451008, 'tstamp': 451008}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275664, 'error': None, 'target': 'ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.498 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[23c68359-5405-48a2-9f5d-810c1e6ce17d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c3486ca-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:63:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451008, 'reachable_time': 18987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275665, 'error': None, 'target': 'ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
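
The two large RTM_NEWLINK payloads above are raw pyroute2 netlink messages returned through privsep while the agent verifies tap3c3486ca-b1 inside the new namespace. Reading the same attributes directly is a short job with pyroute2; a sketch, assuming the pyroute2 package is available and the namespace already exists (whether NetNS supports the context-manager form on every release is an assumption here):

    from pyroute2 import NetNS   # same library the agent drives via privsep

    ns_name = "ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6"
    with NetNS(ns_name) as ns:
        for link in ns.get_links():
            # The log's RTM_NEWLINK dumps carry exactly these IFLA_* attributes.
            print(link.get_attr("IFLA_IFNAME"),
                  link.get_attr("IFLA_OPERSTATE"),
                  link.get_attr("IFLA_ADDRESS"))

For the dump above this would print tap3c3486ca-b1 UP fa:16:3e:fd:63:70 (plus the namespace loopback).
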
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.545 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b2efd3-e012-4737-b55a-f529c8273361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.625 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[83fdfee7-fbc6-4c8a-b664-5f4412db82df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.627 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3486ca-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.627 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.628 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c3486ca-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.630 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:47 np0005549474 NetworkManager[49051]: <info>  [1765102547.6338] manager: (tap3c3486ca-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Dec  7 05:15:47 np0005549474 kernel: tap3c3486ca-b0: entered promiscuous mode
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.636 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.637 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c3486ca-b0, col_values=(('external_ids', {'iface-id': 'ab85367c-cc27-484d-aae8-2c3faa8602a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:15:47 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:47Z|00083|binding|INFO|Releasing lport ab85367c-cc27-484d-aae8-2c3faa8602a8 from this chassis (sb_readonly=0)
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.639 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.665 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.666 164143 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.667 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[a726a30b-25d5-481f-9d50-d292f871a8f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.668 164143 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: global
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    log         /dev/log local0 debug
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    log-tag     haproxy-metadata-proxy-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    user        root
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    group       root
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    maxconn     1024
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    pidfile     /var/lib/neutron/external/pids/3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6.pid.haproxy
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    daemon
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: defaults
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    log global
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    mode http
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    option httplog
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    option dontlognull
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    option http-server-close
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    option forwardfor
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    retries                 3
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    timeout http-request    30s
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    timeout connect         30s
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    timeout client          32s
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    timeout server          32s
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    timeout http-keep-alive 30s
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: listen listener
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    bind 169.254.169.254:80
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    server metadata /var/lib/neutron/metadata_proxy
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]:    http-request add-header X-OVN-Network-ID 3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  7 05:15:47 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:15:47.669 164143 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6', 'env', 'PROCESS_TAG=haproxy-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
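(The rendered configuration above binds 169.254.169.254:80 inside the ovnmeta- namespace, proxies to the UNIX socket at /var/lib/neutron/metadata_proxy, and tags each request with the network UUID via X-OVN-Network-ID. A minimal sketch, not the agent's code, of checking such a rendered file before launching it, reusing the namespace and config path from the rootwrap command above; haproxy's -c flag only validates the configuration and exits:

    import subprocess

    ns = "ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6"
    cfg = ("/var/lib/neutron/ovn-metadata-proxy/"
           "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6.conf")

    # Requires root; mirrors the logged command but with '-c' (check only).
    subprocess.run(["ip", "netns", "exec", ns, "haproxy", "-c", "-f", cfg],
                   check=True)
)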
Dec  7 05:15:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.845 256757 DEBUG nova.compute.manager [req-e6cc69a7-3f32-4d2f-9393-554740f5265f req-fdb153bd-4f0f-47cc-9588-92f0c93e6ce4 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.845 256757 DEBUG oslo_concurrency.lockutils [req-e6cc69a7-3f32-4d2f-9393-554740f5265f req-fdb153bd-4f0f-47cc-9588-92f0c93e6ce4 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.846 256757 DEBUG oslo_concurrency.lockutils [req-e6cc69a7-3f32-4d2f-9393-554740f5265f req-fdb153bd-4f0f-47cc-9588-92f0c93e6ce4 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.847 256757 DEBUG oslo_concurrency.lockutils [req-e6cc69a7-3f32-4d2f-9393-554740f5265f req-fdb153bd-4f0f-47cc-9588-92f0c93e6ce4 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.847 256757 DEBUG nova.compute.manager [req-e6cc69a7-3f32-4d2f-9393-554740f5265f req-fdb153bd-4f0f-47cc-9588-92f0c93e6ce4 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Processing event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
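(The three lockutils messages above are the standard oslo.concurrency pattern: a named in-process lock, "<instance-uuid>-events", serializes access to the instance's pending-event table while one event is popped. A minimal sketch of the same pattern, with illustrative names rather than nova's actual structures:

    from oslo_concurrency import lockutils

    pending_events = {}  # event name -> waiter; per instance in reality

    def pop_instance_event(instance_uuid, event_name):
        # lockutils.lock() is the context-manager form of the named lock
        # being acquired and released in the log lines above.
        with lockutils.lock(instance_uuid + "-events"):
            return pending_events.pop(event_name, None)
)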
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.915 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102547.907673, 0e659348-3b39-4619-862c-1b89d81d26b3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.915 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] VM Started (Lifecycle Event)
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.919 256757 DEBUG nova.compute.manager [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.924 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.928 256757 INFO nova.virt.libvirt.driver [-] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Instance spawned successfully.
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.929 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  7 05:15:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:47.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.960 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.967 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.971 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.971 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.972 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.972 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.972 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:15:47 np0005549474 nova_compute[256753]: 2025-12-07 10:15:47.973 256757 DEBUG nova.virt.libvirt.driver [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.007 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.007 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102547.9078717, 0e659348-3b39-4619-862c-1b89d81d26b3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.007 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] VM Paused (Lifecycle Event)
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.043 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.050 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102547.9221313, 0e659348-3b39-4619-862c-1b89d81d26b3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.050 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] VM Resumed (Lifecycle Event)
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.056 256757 INFO nova.compute.manager [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Took 7.10 seconds to spawn the instance on the hypervisor.
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.056 256757 DEBUG nova.compute.manager [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.074 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.079 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  7 05:15:48 np0005549474 podman[275735]: 2025-12-07 10:15:48.080006468 +0000 UTC m=+0.079263870 container create 34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.114 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  7 05:15:48 np0005549474 podman[275735]: 2025-12-07 10:15:48.039345466 +0000 UTC m=+0.038602908 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  7 05:15:48 np0005549474 systemd[1]: Started libpod-conmon-34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7.scope.
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.151 256757 INFO nova.compute.manager [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Took 8.07 seconds to build instance.
Dec  7 05:15:48 np0005549474 nova_compute[256753]: 2025-12-07 10:15:48.170 256757 DEBUG oslo_concurrency.lockutils [None req-f36647e1-93e5-4863-8eb1-ca973b392626 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:15:48 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:15:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0124da37d099e6f14ad9df6de6273406e834c7ad862d2f6fab147538fd4b8ad3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  7 05:15:48 np0005549474 podman[275735]: 2025-12-07 10:15:48.214441074 +0000 UTC m=+0.213698516 container init 34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:15:48 np0005549474 podman[275735]: 2025-12-07 10:15:48.219425269 +0000 UTC m=+0.218682671 container start 34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:15:48 np0005549474 neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6[275751]: [NOTICE]   (275755) : New worker (275757) forked
Dec  7 05:15:48 np0005549474 neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6[275751]: [NOTICE]   (275755) : Loading success.
Dec  7 05:15:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1008: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec  7 05:15:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:49.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:49.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:49 np0005549474 nova_compute[256753]: 2025-12-07 10:15:49.959 256757 DEBUG nova.compute.manager [req-a2f76a3b-c44f-4ee1-a4f7-3fc9c3a245ff req-a5b6963c-11cc-4058-865f-8d22545a9974 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:15:49 np0005549474 nova_compute[256753]: 2025-12-07 10:15:49.959 256757 DEBUG oslo_concurrency.lockutils [req-a2f76a3b-c44f-4ee1-a4f7-3fc9c3a245ff req-a5b6963c-11cc-4058-865f-8d22545a9974 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:15:49 np0005549474 nova_compute[256753]: 2025-12-07 10:15:49.960 256757 DEBUG oslo_concurrency.lockutils [req-a2f76a3b-c44f-4ee1-a4f7-3fc9c3a245ff req-a5b6963c-11cc-4058-865f-8d22545a9974 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:15:49 np0005549474 nova_compute[256753]: 2025-12-07 10:15:49.960 256757 DEBUG oslo_concurrency.lockutils [req-a2f76a3b-c44f-4ee1-a4f7-3fc9c3a245ff req-a5b6963c-11cc-4058-865f-8d22545a9974 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:15:49 np0005549474 nova_compute[256753]: 2025-12-07 10:15:49.960 256757 DEBUG nova.compute.manager [req-a2f76a3b-c44f-4ee1-a4f7-3fc9c3a245ff req-a5b6963c-11cc-4058-865f-8d22545a9974 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] No waiting events found dispatching network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:15:49 np0005549474 nova_compute[256753]: 2025-12-07 10:15:49.961 256757 WARNING nova.compute.manager [req-a2f76a3b-c44f-4ee1-a4f7-3fc9c3a245ff req-a5b6963c-11cc-4058-865f-8d22545a9974 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received unexpected event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 for instance with vm_state active and task_state None.
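(Unlike the network-vif-plugged event at 10:15:47, on which a spawn thread was explicitly waiting, this second copy arrives after the instance is already active with no task pending, so no waiter is registered and nova logs the WARNING above instead of dispatching it. Continuing the earlier sketch under the same illustrative names:

    import logging

    log = logging.getLogger(__name__)

    def external_instance_event(instance_uuid, event_name):
        waiter = pop_instance_event(instance_uuid, event_name)
        if waiter is None:
            # Nothing is blocked in wait_for_instance_event: warn and drop.
            log.warning("Received unexpected event %s for instance %s",
                        event_name, instance_uuid)
        else:
            waiter.set()  # wake the thread waiting on this event
)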
Dec  7 05:15:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:49] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:15:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:49] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:15:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1009: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec  7 05:15:50 np0005549474 nova_compute[256753]: 2025-12-07 10:15:50.845 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:51.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:51 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:51Z|00084|binding|INFO|Releasing lport ab85367c-cc27-484d-aae8-2c3faa8602a8 from this chassis (sb_readonly=0)
Dec  7 05:15:51 np0005549474 NetworkManager[49051]: <info>  [1765102551.1424] manager: (patch-br-int-to-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Dec  7 05:15:51 np0005549474 NetworkManager[49051]: <info>  [1765102551.1434] manager: (patch-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec  7 05:15:51 np0005549474 nova_compute[256753]: 2025-12-07 10:15:51.144 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:51 np0005549474 ovn_controller[154296]: 2025-12-07T10:15:51Z|00085|binding|INFO|Releasing lport ab85367c-cc27-484d-aae8-2c3faa8602a8 from this chassis (sb_readonly=0)
Dec  7 05:15:51 np0005549474 nova_compute[256753]: 2025-12-07 10:15:51.201 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:51 np0005549474 nova_compute[256753]: 2025-12-07 10:15:51.290 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:51 np0005549474 nova_compute[256753]: 2025-12-07 10:15:51.560 256757 DEBUG nova.compute.manager [req-65d16b1a-bb43-45b2-a35d-dec3766b2024 req-83d5d202-0219-492a-8c12-62a52736d8c2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-changed-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:15:51 np0005549474 nova_compute[256753]: 2025-12-07 10:15:51.560 256757 DEBUG nova.compute.manager [req-65d16b1a-bb43-45b2-a35d-dec3766b2024 req-83d5d202-0219-492a-8c12-62a52736d8c2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Refreshing instance network info cache due to event network-changed-8f12df55-697f-4079-af36-c87cc2d6cff1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  7 05:15:51 np0005549474 nova_compute[256753]: 2025-12-07 10:15:51.561 256757 DEBUG oslo_concurrency.lockutils [req-65d16b1a-bb43-45b2-a35d-dec3766b2024 req-83d5d202-0219-492a-8c12-62a52736d8c2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  7 05:15:51 np0005549474 nova_compute[256753]: 2025-12-07 10:15:51.561 256757 DEBUG oslo_concurrency.lockutils [req-65d16b1a-bb43-45b2-a35d-dec3766b2024 req-83d5d202-0219-492a-8c12-62a52736d8c2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  7 05:15:51 np0005549474 nova_compute[256753]: 2025-12-07 10:15:51.562 256757 DEBUG nova.network.neutron [req-65d16b1a-bb43-45b2-a35d-dec3766b2024 req-83d5d202-0219-492a-8c12-62a52736d8c2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Refreshing network info cache for port 8f12df55-697f-4079-af36-c87cc2d6cff1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  7 05:15:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:51.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:52 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=infra.usagestats t=2025-12-07T10:15:52.676530375Z level=info msg="Usage stats are ready to report"
Dec  7 05:15:52 np0005549474 nova_compute[256753]: 2025-12-07 10:15:52.734 256757 DEBUG nova.network.neutron [req-65d16b1a-bb43-45b2-a35d-dec3766b2024 req-83d5d202-0219-492a-8c12-62a52736d8c2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updated VIF entry in instance network info cache for port 8f12df55-697f-4079-af36-c87cc2d6cff1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  7 05:15:52 np0005549474 nova_compute[256753]: 2025-12-07 10:15:52.735 256757 DEBUG nova.network.neutron [req-65d16b1a-bb43-45b2-a35d-dec3766b2024 req-83d5d202-0219-492a-8c12-62a52736d8c2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updating instance_info_cache with network_info: [{"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
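(The cache payload above is ordinary JSON once the log prefix is stripped, so the addresses can be pulled out with the standard library alone. A sketch, where network_info stands for the parsed list from this line:

    import json

    def addresses(network_info):
        """Yield ('fixed'|'floating', address) pairs from a nova VIF list."""
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield "fixed", ip["address"]          # 10.100.0.6
                    for fip in ip.get("floating_ips", []):
                        yield "floating", fip["address"]  # 192.168.122.186

    # network_info = json.loads(payload)  # payload: the [...] list logged above
)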
Dec  7 05:15:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1010: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec  7 05:15:52 np0005549474 nova_compute[256753]: 2025-12-07 10:15:52.757 256757 DEBUG oslo_concurrency.lockutils [req-65d16b1a-bb43-45b2-a35d-dec3766b2024 req-83d5d202-0219-492a-8c12-62a52736d8c2 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:15:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:53.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:15:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:53.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:15:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1011: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec  7 05:15:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:55.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:55 np0005549474 nova_compute[256753]: 2025-12-07 10:15:55.884 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:55.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:15:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:15:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:15:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:15:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:15:56 np0005549474 nova_compute[256753]: 2025-12-07 10:15:56.292 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:15:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1012: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  7 05:15:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:57.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:15:57.175Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:15:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:15:57.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:15:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:15:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:15:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:15:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:57.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1013: 337 pgs: 337 active+clean; 134 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec  7 05:15:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:59.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:15:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:15:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:59.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:15:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:59] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  7 05:15:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:15:59] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.267782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560267827, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1225, "num_deletes": 501, "total_data_size": 1694854, "memory_usage": 1730896, "flush_reason": "Manual Compaction"}
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560285140, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1564458, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28394, "largest_seqno": 29618, "table_properties": {"data_size": 1559060, "index_size": 2346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15676, "raw_average_key_size": 19, "raw_value_size": 1546073, "raw_average_value_size": 1954, "num_data_blocks": 101, "num_entries": 791, "num_filter_entries": 791, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102488, "oldest_key_time": 1765102488, "file_creation_time": 1765102560, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 17482 microseconds, and 8178 cpu microseconds.
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.285257) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1564458 bytes OK
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.285291) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.288369) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.288422) EVENT_LOG_v1 {"time_micros": 1765102560288409, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.288453) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1688226, prev total WAL file size 1688226, number of live WAL files 2.
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.289691) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1527KB)], [62(16MB)]
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560289746, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 18884182, "oldest_snapshot_seqno": -1}
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5904 keys, 12778691 bytes, temperature: kUnknown
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560399426, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12778691, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12741021, "index_size": 21816, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 152623, "raw_average_key_size": 25, "raw_value_size": 12636393, "raw_average_value_size": 2140, "num_data_blocks": 874, "num_entries": 5904, "num_filter_entries": 5904, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765102560, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.399832) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12778691 bytes
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.401718) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.0 rd, 116.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 16.5 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(20.2) write-amplify(8.2) OK, records in: 6922, records dropped: 1018 output_compression: NoCompression
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.401765) EVENT_LOG_v1 {"time_micros": 1765102560401748, "job": 34, "event": "compaction_finished", "compaction_time_micros": 109800, "compaction_time_cpu_micros": 49983, "output_level": 6, "num_output_files": 1, "total_output_size": 12778691, "num_input_records": 6922, "num_output_records": 5904, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560402542, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560409128, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.289540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.409307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.409318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.409322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.409327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:16:00 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:16:00.409331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
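(The JOB 34 summary's amplification figures follow directly from the byte counts logged in this block: table #64, the level-0 flush, is 1564458 bytes; the compaction read 18884182 bytes in total (input_data_size, tables #64 + #62); and it wrote table #65 at 12778691 bytes. A quick check of the arithmetic, with the definitions inferred from the logged values:

    l0_in = 1_564_458      # table #64: the level-0 input file
    total_in = 18_884_182  # input_data_size: tables #64 + #62
    out = 12_778_691       # total_output_size: table #65

    print(round(out / l0_in, 1))               # 8.2  == write-amplify
    print(round((total_in + out) / l0_in, 1))  # 20.2 == read-write-amplify
)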
Dec  7 05:16:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1014: 337 pgs: 337 active+clean; 134 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec  7 05:16:00 np0005549474 nova_compute[256753]: 2025-12-07 10:16:00.932 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:16:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:01.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:01 np0005549474 nova_compute[256753]: 2025-12-07 10:16:01.294 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:01 np0005549474 ovn_controller[154296]: 2025-12-07T10:16:01Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:3c:19 10.100.0.6
Dec  7 05:16:01 np0005549474 ovn_controller[154296]: 2025-12-07T10:16:01Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:3c:19 10.100.0.6
Dec  7 05:16:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:01.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1015: 337 pgs: 337 active+clean; 134 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec  7 05:16:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:03.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:03.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:04 np0005549474 podman[275784]: 2025-12-07 10:16:04.267679661 +0000 UTC m=+0.084181415 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  7 05:16:04 np0005549474 podman[275785]: 2025-12-07 10:16:04.322934149 +0000 UTC m=+0.127492048 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  7 05:16:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1016: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Dec  7 05:16:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:05.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:05 np0005549474 nova_compute[256753]: 2025-12-07 10:16:05.938 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:16:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:05.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:16:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:16:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:06 np0005549474 nova_compute[256753]: 2025-12-07 10:16:06.296 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1017: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 235 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Dec  7 05:16:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:07.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:07.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:16:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:07.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:08 np0005549474 podman[275863]: 2025-12-07 10:16:08.259507799 +0000 UTC m=+0.075426336 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  7 05:16:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1018: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Dec  7 05:16:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:09.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:09.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:09] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  7 05:16:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:09] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  7 05:16:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1019: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Dec  7 05:16:10 np0005549474 nova_compute[256753]: 2025-12-07 10:16:10.974 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:16:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:11.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:11 np0005549474 nova_compute[256753]: 2025-12-07 10:16:11.297 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:11.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:16:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:16:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:16:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:16:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:16:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:16:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:16:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:16:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1020: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Dec  7 05:16:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:13.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:13.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1021: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Dec  7 05:16:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:15.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:15 np0005549474 nova_compute[256753]: 2025-12-07 10:16:15.977 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:15.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:16:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:16 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:16 np0005549474 nova_compute[256753]: 2025-12-07 10:16:16.298 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1022: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 70 op/s
Dec  7 05:16:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:17.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:17.178Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:16:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:17.178Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:16:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:16:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:17.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:16:18 np0005549474 nova_compute[256753]: 2025-12-07 10:16:18.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:16:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1023: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Dec  7 05:16:18 np0005549474 ceph-mgr[74811]: [dashboard INFO request] [192.168.122.100:49714] [POST] [200] [0.001s] [4.0B] [1f198c0e-863d-4bd4-934d-7d5a7af90903] /api/prometheus_receiver
Dec  7 05:16:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:19.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:19.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:19] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  7 05:16:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:19] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec  7 05:16:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1024: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec  7 05:16:20 np0005549474 nova_compute[256753]: 2025-12-07 10:16:20.979 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:16:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:21 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:21.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:21 np0005549474 nova_compute[256753]: 2025-12-07 10:16:21.300 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:21.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:22 np0005549474 nova_compute[256753]: 2025-12-07 10:16:22.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:16:22 np0005549474 nova_compute[256753]: 2025-12-07 10:16:22.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:16:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1025: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec  7 05:16:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:23.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:23 np0005549474 nova_compute[256753]: 2025-12-07 10:16:23.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:16:23 np0005549474 nova_compute[256753]: 2025-12-07 10:16:23.783 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:16:23 np0005549474 nova_compute[256753]: 2025-12-07 10:16:23.783 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:16:23 np0005549474 nova_compute[256753]: 2025-12-07 10:16:23.784 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:16:23 np0005549474 nova_compute[256753]: 2025-12-07 10:16:23.784 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  7 05:16:23 np0005549474 nova_compute[256753]: 2025-12-07 10:16:23.785 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:16:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:23.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:16:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1504565590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.262 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.346 256757 DEBUG nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.347 256757 DEBUG nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.520 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.521 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4389MB free_disk=59.89735412597656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.522 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.522 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.584 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Instance 0e659348-3b39-4619-862c-1b89d81d26b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.584 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.584 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.612 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:16:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1026: 337 pgs: 337 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.912 256757 INFO nova.compute.manager [None req-1bcda52e-e9e6-4beb-915c-efd8b0d5c79d 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Get console output#033[00m
Dec  7 05:16:24 np0005549474 nova_compute[256753]: 2025-12-07 10:16:24.920 263860 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  7 05:16:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:16:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539489423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:16:25 np0005549474 nova_compute[256753]: 2025-12-07 10:16:25.056 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:16:25 np0005549474 nova_compute[256753]: 2025-12-07 10:16:25.060 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  7 05:16:25 np0005549474 nova_compute[256753]: 2025-12-07 10:16:25.085 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  7 05:16:25 np0005549474 nova_compute[256753]: 2025-12-07 10:16:25.116 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  7 05:16:25 np0005549474 nova_compute[256753]: 2025-12-07 10:16:25.117 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:16:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:25.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:25.921 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:16:25 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:25.922 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  7 05:16:25 np0005549474 nova_compute[256753]: 2025-12-07 10:16:25.949 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:25 np0005549474 nova_compute[256753]: 2025-12-07 10:16:25.982 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:25.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:16:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.118 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.118 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.137 256757 DEBUG nova.compute.manager [req-1d06aa43-02ff-4e11-91ee-006a979dceca req-acf8bf43-994d-47a5-b346-d7edf29b01d9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-changed-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.137 256757 DEBUG nova.compute.manager [req-1d06aa43-02ff-4e11-91ee-006a979dceca req-acf8bf43-994d-47a5-b346-d7edf29b01d9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Refreshing instance network info cache due to event network-changed-8f12df55-697f-4079-af36-c87cc2d6cff1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.137 256757 DEBUG oslo_concurrency.lockutils [req-1d06aa43-02ff-4e11-91ee-006a979dceca req-acf8bf43-994d-47a5-b346-d7edf29b01d9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.138 256757 DEBUG oslo_concurrency.lockutils [req-1d06aa43-02ff-4e11-91ee-006a979dceca req-acf8bf43-994d-47a5-b346-d7edf29b01d9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.138 256757 DEBUG nova.network.neutron [req-1d06aa43-02ff-4e11-91ee-006a979dceca req-acf8bf43-994d-47a5-b346-d7edf29b01d9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Refreshing network info cache for port 8f12df55-697f-4079-af36-c87cc2d6cff1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.302 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  7 05:16:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1027: 337 pgs: 337 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec  7 05:16:26 np0005549474 nova_compute[256753]: 2025-12-07 10:16:26.977 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:16:27 np0005549474 nova_compute[256753]: 2025-12-07 10:16:27.126 256757 INFO nova.compute.manager [None req-ff17ca2a-bd77-4700-8a5e-b432e98a30ec 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Get console output#033[00m
Dec  7 05:16:27 np0005549474 nova_compute[256753]: 2025-12-07 10:16:27.133 263860 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  7 05:16:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:27.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:27.179Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:16:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:27.180Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:16:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:16:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:16:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:27.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.162 256757 DEBUG nova.network.neutron [req-1d06aa43-02ff-4e11-91ee-006a979dceca req-acf8bf43-994d-47a5-b346-d7edf29b01d9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updated VIF entry in instance network info cache for port 8f12df55-697f-4079-af36-c87cc2d6cff1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.163 256757 DEBUG nova.network.neutron [req-1d06aa43-02ff-4e11-91ee-006a979dceca req-acf8bf43-994d-47a5-b346-d7edf29b01d9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updating instance_info_cache with network_info: [{"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.182 256757 DEBUG oslo_concurrency.lockutils [req-1d06aa43-02ff-4e11-91ee-006a979dceca req-acf8bf43-994d-47a5-b346-d7edf29b01d9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.182 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquired lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.183 256757 DEBUG nova.network.neutron [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.183 256757 DEBUG nova.objects.instance [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0e659348-3b39-4619-862c-1b89d81d26b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.255 256757 DEBUG nova.compute.manager [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-vif-unplugged-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.255 256757 DEBUG oslo_concurrency.lockutils [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.256 256757 DEBUG oslo_concurrency.lockutils [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.256 256757 DEBUG oslo_concurrency.lockutils [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.256 256757 DEBUG nova.compute.manager [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] No waiting events found dispatching network-vif-unplugged-8f12df55-697f-4079-af36-c87cc2d6cff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.257 256757 WARNING nova.compute.manager [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received unexpected event network-vif-unplugged-8f12df55-697f-4079-af36-c87cc2d6cff1 for instance with vm_state active and task_state None.
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.257 256757 DEBUG nova.compute.manager [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.257 256757 DEBUG oslo_concurrency.lockutils [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.258 256757 DEBUG oslo_concurrency.lockutils [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.258 256757 DEBUG oslo_concurrency.lockutils [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.258 256757 DEBUG nova.compute.manager [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] No waiting events found dispatching network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.259 256757 WARNING nova.compute.manager [req-0fd31344-b8b1-4727-9918-ae64aef96741 req-7a4933ad-81dc-4772-abe4-804cc1bd708a ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received unexpected event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 for instance with vm_state active and task_state None.
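The burst above is nova's external-event plumbing: Neutron posted network-vif-unplugged and then network-vif-plugged for port 8f12df55, nova popped its per-instance event queue under the "<uuid>-events" lock, found no registered waiter (the instance is steady-state: vm_state active, task_state None), and logged each event as unexpected before dropping it. The Acquiring/acquired/released triples come from oslo.concurrency's synchronized decorator; a minimal sketch of the pattern, where the event store and helper are hypothetical and only the lockutils API is real:

    from oslo_concurrency import lockutils

    _events = {}  # instance_uuid -> {event_name: waiter}; hypothetical store

    def pop_instance_event(instance_uuid, event_name):
        @lockutils.synchronized(instance_uuid + '-events')
        def _pop_event():
            # Returning None here is the "No waiting events found
            # dispatching ..." case in the log above.
            return _events.get(instance_uuid, {}).pop(event_name, None)
        return _pop_event()

When a task actually armed a waiter beforehand (for example an interface attach waiting on network-vif-plugged), the pop returns it and the event completes the operation instead of producing the WARNING.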
Dec  7 05:16:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1028: 337 pgs: 337 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 62 op/s
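The pgmap digest above says the Ceph side is healthy while all of this churns: every one of the 337 placement groups is active+clean. The same check can be scripted against the cluster rather than grepped out of the mgr log; a sketch, assuming the ceph CLI and an admin keyring are available on the host:

    import json, subprocess

    # Matches the pgmap line: counts PGs by state from `ceph status`.
    status = json.loads(subprocess.check_output(
        ["ceph", "status", "--format", "json"]))
    total = status["pgmap"]["num_pgs"]
    clean = sum(s["count"] for s in status["pgmap"]["pgs_by_state"]
                if s["state_name"] == "active+clean")
    print(f"{clean}/{total} PGs active+clean")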
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.844 256757 DEBUG nova.compute.manager [req-0e3e6276-216e-42ff-ab52-4a2ea390f9d0 req-e1d03a01-ebdf-4a42-9ac6-118c089866c1 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-changed-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:16:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:28.844Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:16:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:28.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
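Meanwhile Alertmanager cannot deliver to the ceph-dashboard webhook receivers: both compute-1 and compute-2 time out on TCP 8443, so the notification is dropped after two attempts per receiver. A direct probe of the same URL separates a dead listener from a routing or firewall problem; a sketch, with the URL copied from the log and the 5 s timeout an arbitrary choice:

    import socket
    import urllib.error
    import urllib.request

    # Reproduce the dispatcher's failing POST outside Alertmanager.
    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"[]",
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except (urllib.error.URLError, socket.timeout) as exc:
        print("unreachable:", exc)  # the "dial tcp ... i/o timeout" case

If the dashboard module on those hosts is actually serving HTTPS on 8443, the plain-http receiver URL would be one obvious thing to check.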
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.845 256757 DEBUG nova.compute.manager [req-0e3e6276-216e-42ff-ab52-4a2ea390f9d0 req-e1d03a01-ebdf-4a42-9ac6-118c089866c1 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Refreshing instance network info cache due to event network-changed-8f12df55-697f-4079-af36-c87cc2d6cff1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.845 256757 DEBUG oslo_concurrency.lockutils [req-0e3e6276-216e-42ff-ab52-4a2ea390f9d0 req-e1d03a01-ebdf-4a42-9ac6-118c089866c1 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  7 05:16:28 np0005549474 nova_compute[256753]: 2025-12-07 10:16:28.997 256757 INFO nova.compute.manager [None req-84e1a43a-e9a8-44d8-adac-8f025b48e843 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Get console output
Dec  7 05:16:29 np0005549474 nova_compute[256753]: 2025-12-07 10:16:29.003 263860 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
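The "Get console output" API call makes nova's privsep helper drain the instance's console pty. A non-blocking read() on a binary file object can return None when no data is pending, and naively appending that to a bytes buffer raises exactly the TypeError nova logs and ignores here. The defensive shape of such a reader, as a hypothetical sketch rather than nova's actual code:

    def drain_console(f, limit=64 * 1024):
        # f: a non-blocking binary file object wrapping the console pty.
        buf = b""
        while len(buf) < limit:
            chunk = f.read(4096)
            if not chunk:      # None (no data yet) or b"" (EOF): stop cleanly
                break
            buf += chunk       # "buf += None" is the TypeError logged above
        return buf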
Dec  7 05:16:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:29.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
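The anonymous "HEAD / HTTP/1.0" requests that bracket everything (here from 192.168.122.100, elsewhere .102) have the shape of load-balancer health checks against radosgw, arriving roughly every two seconds per prober. The equivalent probe by hand, with the endpoint left as a placeholder since the log does not show which address and port the beast frontend binds:

    import http.client

    RGW_HOST, RGW_PORT = "np0005549474", 8080  # placeholders; check your rgw frontend config

    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=3)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 matches the beast log lines above
    conn.close()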
Dec  7 05:16:29 np0005549474 nova_compute[256753]: 2025-12-07 10:16:29.706 256757 DEBUG nova.network.neutron [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updating instance_info_cache with network_info: [{"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:16:29 np0005549474 nova_compute[256753]: 2025-12-07 10:16:29.722 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Releasing lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:16:29 np0005549474 nova_compute[256753]: 2025-12-07 10:16:29.723 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  7 05:16:29 np0005549474 nova_compute[256753]: 2025-12-07 10:16:29.723 256757 DEBUG oslo_concurrency.lockutils [req-0e3e6276-216e-42ff-ab52-4a2ea390f9d0 req-e1d03a01-ebdf-4a42-9ac6-118c089866c1 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  7 05:16:29 np0005549474 nova_compute[256753]: 2025-12-07 10:16:29.724 256757 DEBUG nova.network.neutron [req-0e3e6276-216e-42ff-ab52-4a2ea390f9d0 req-e1d03a01-ebdf-4a42-9ac6-118c089866c1 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Refreshing network info cache for port 8f12df55-697f-4079-af36-c87cc2d6cff1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  7 05:16:29 np0005549474 nova_compute[256753]: 2025-12-07 10:16:29.726 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:16:29 np0005549474 nova_compute[256753]: 2025-12-07 10:16:29.727 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:16:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:29] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  7 05:16:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:29] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  7 05:16:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
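The ganesha.nfsd lines show the NFS server entering its recovery grace period: 90 seconds during which only lock reclaims are honored, with client records reloaded from the RADOS backend. With "reclaim complete(0) clid count(0)" there is no state to reclaim, so it is immediately eligible to lift grace, and the rados_cluster_grace_enforcing probe a few lines below is the cluster-wide check that precedes lifting. The shared grace database can be inspected directly; a sketch, assuming the ganesha-rados-grace tool is installed, and guessing at the pool and namespace (cephadm NFS clusters commonly keep the grace db in the .nfs pool under a namespace named after the cluster, here cephfs-2; verify both before relying on this):

    import subprocess

    # Dump current/recovery epochs and per-node enforcing flags from the
    # grace db that the nfs_start_grace/nfs_try_lift_grace messages refer to.
    out = subprocess.check_output([
        "ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs-2", "dump",
    ]).decode()
    print(out)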
Dec  7 05:16:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:29.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.405 256757 DEBUG nova.compute.manager [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.406 256757 DEBUG oslo_concurrency.lockutils [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.407 256757 DEBUG oslo_concurrency.lockutils [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.407 256757 DEBUG oslo_concurrency.lockutils [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.407 256757 DEBUG nova.compute.manager [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] No waiting events found dispatching network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.408 256757 WARNING nova.compute.manager [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received unexpected event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 for instance with vm_state active and task_state None.
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.408 256757 DEBUG nova.compute.manager [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.408 256757 DEBUG oslo_concurrency.lockutils [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.408 256757 DEBUG oslo_concurrency.lockutils [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.408 256757 DEBUG oslo_concurrency.lockutils [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.408 256757 DEBUG nova.compute.manager [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] No waiting events found dispatching network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.408 256757 WARNING nova.compute.manager [req-e27f7a26-3aec-4658-a0ca-91e6fd96115d req-fd9bd9c3-dc5b-4c6e-9971-74ea754748dd ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received unexpected event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 for instance with vm_state active and task_state None.
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.721 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:16:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1029: 337 pgs: 337 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 13 KiB/s wr, 1 op/s
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.913 256757 DEBUG nova.network.neutron [req-0e3e6276-216e-42ff-ab52-4a2ea390f9d0 req-e1d03a01-ebdf-4a42-9ac6-118c089866c1 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updated VIF entry in instance network info cache for port 8f12df55-697f-4079-af36-c87cc2d6cff1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.913 256757 DEBUG nova.network.neutron [req-0e3e6276-216e-42ff-ab52-4a2ea390f9d0 req-e1d03a01-ebdf-4a42-9ac6-118c089866c1 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updating instance_info_cache with network_info: [{"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.929 256757 DEBUG oslo_concurrency.lockutils [req-0e3e6276-216e-42ff-ab52-4a2ea390f9d0 req-e1d03a01-ebdf-4a42-9ac6-118c089866c1 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:16:30 np0005549474 nova_compute[256753]: 2025-12-07 10:16:30.984 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
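Two refreshers raced here and serialized cleanly on the refresh_cache-<uuid> lock: the periodic _heal_instance_info_cache task finished first (10:16:29.722), then the network-changed event handler acquired the lock, re-pulled the port from Neutron, rewrote the same network_info, and released at 10:16:30.929. The "Running periodic task ..." lines come from oslo.service's periodic task machinery; a runnable skeleton of it, where the task body is a stand-in and only the API is real:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # refresh one instance's network info cache per pass

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)  # nova's service loop drives this on a timer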
Dec  7 05:16:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:31.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:31 np0005549474 nova_compute[256753]: 2025-12-07 10:16:31.305 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:16:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1030: 337 pgs: 337 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 15 KiB/s wr, 2 op/s
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:16:31 np0005549474 nova_compute[256753]: 2025-12-07 10:16:31.748 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:16:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
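All of the mon traffic above is the cephadm mgr module doing its periodic reconcile: regenerating a minimal ceph.conf, re-reading the client.admin and bootstrap-osd keys, persisting its state as config-keys, and checking for destroyed OSDs. Each handle_command line is a mon_command call, which is also scriptable from Python through librados; a sketch, assuming python3-rados plus a readable ceph.conf and keyring on this host:

    import json
    import rados

    # Issue one of the audited commands above over the same interface
    # mgr.compute-0.dotugk is using.
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, outbuf.decode() or outs)  # outs carries the error string on ret != 0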
Dec  7 05:16:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:32.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:32 np0005549474 podman[276149]: 2025-12-07 10:16:32.096711721 +0000 UTC m=+0.049257657 container create 4900f31164c5d0efa1bcdead75fffe6f1ffac434f7c910de147cfd1b3fb20bc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_aryabhata, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:16:32 np0005549474 systemd[1]: Started libpod-conmon-4900f31164c5d0efa1bcdead75fffe6f1ffac434f7c910de147cfd1b3fb20bc7.scope.
Dec  7 05:16:32 np0005549474 podman[276149]: 2025-12-07 10:16:32.07418355 +0000 UTC m=+0.026729496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:16:32 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:16:32 np0005549474 podman[276149]: 2025-12-07 10:16:32.194556563 +0000 UTC m=+0.147102559 container init 4900f31164c5d0efa1bcdead75fffe6f1ffac434f7c910de147cfd1b3fb20bc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_aryabhata, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:16:32 np0005549474 podman[276149]: 2025-12-07 10:16:32.206314863 +0000 UTC m=+0.158860759 container start 4900f31164c5d0efa1bcdead75fffe6f1ffac434f7c910de147cfd1b3fb20bc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:16:32 np0005549474 podman[276149]: 2025-12-07 10:16:32.209680594 +0000 UTC m=+0.162226570 container attach 4900f31164c5d0efa1bcdead75fffe6f1ffac434f7c910de147cfd1b3fb20bc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_aryabhata, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec  7 05:16:32 np0005549474 gifted_aryabhata[276165]: 167 167
Dec  7 05:16:32 np0005549474 systemd[1]: libpod-4900f31164c5d0efa1bcdead75fffe6f1ffac434f7c910de147cfd1b3fb20bc7.scope: Deactivated successfully.
Dec  7 05:16:32 np0005549474 podman[276149]: 2025-12-07 10:16:32.215151882 +0000 UTC m=+0.167697838 container died 4900f31164c5d0efa1bcdead75fffe6f1ffac434f7c910de147cfd1b3fb20bc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_aryabhata, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 05:16:32 np0005549474 systemd[1]: var-lib-containers-storage-overlay-9f080e1eef14ffd80fa2dd559317a3ea31c235fa1eeb11d6d92934b009b43f80-merged.mount: Deactivated successfully.
Dec  7 05:16:32 np0005549474 podman[276149]: 2025-12-07 10:16:32.254782317 +0000 UTC m=+0.207328223 container remove 4900f31164c5d0efa1bcdead75fffe6f1ffac434f7c910de147cfd1b3fb20bc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_aryabhata, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:16:32 np0005549474 systemd[1]: libpod-conmon-4900f31164c5d0efa1bcdead75fffe6f1ffac434f7c910de147cfd1b3fb20bc7.scope: Deactivated successfully.
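This create, init, start, attach, died, remove arc, all inside roughly 160 ms, is cephadm launching a throwaway container from the ceph image for a quick probe; the container's only output is "167 167", plausibly a uid/gid probe of a ceph-owned path (167 is the ceph user and group inside the image). A hand-run equivalent, where the stat target is an assumption:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # --rm gives the same lifecycle as above: run once, then podman removes it.
    print(subprocess.check_output(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"]).decode().strip())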
Dec  7 05:16:32 np0005549474 podman[276188]: 2025-12-07 10:16:32.493388837 +0000 UTC m=+0.072668071 container create ab8e8825c70a62b6eefef591a6222354fe9519d16ca5d0ce00a3d58a3633d808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chatterjee, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:16:32 np0005549474 podman[276188]: 2025-12-07 10:16:32.462263113 +0000 UTC m=+0.041542417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:16:32 np0005549474 systemd[1]: Started libpod-conmon-ab8e8825c70a62b6eefef591a6222354fe9519d16ca5d0ce00a3d58a3633d808.scope.
Dec  7 05:16:32 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:16:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed849e0ffbe6970058f6fe6b51c18f337780c95d451516a35541fe766d201144/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed849e0ffbe6970058f6fe6b51c18f337780c95d451516a35541fe766d201144/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed849e0ffbe6970058f6fe6b51c18f337780c95d451516a35541fe766d201144/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed849e0ffbe6970058f6fe6b51c18f337780c95d451516a35541fe766d201144/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:32 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed849e0ffbe6970058f6fe6b51c18f337780c95d451516a35541fe766d201144/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:32 np0005549474 podman[276188]: 2025-12-07 10:16:32.604683256 +0000 UTC m=+0.183962480 container init ab8e8825c70a62b6eefef591a6222354fe9519d16ca5d0ce00a3d58a3633d808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chatterjee, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 05:16:32 np0005549474 podman[276188]: 2025-12-07 10:16:32.618174892 +0000 UTC m=+0.197454096 container start ab8e8825c70a62b6eefef591a6222354fe9519d16ca5d0ce00a3d58a3633d808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 05:16:32 np0005549474 podman[276188]: 2025-12-07 10:16:32.621113361 +0000 UTC m=+0.200392615 container attach ab8e8825c70a62b6eefef591a6222354fe9519d16ca5d0ce00a3d58a3633d808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:16:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:32 np0005549474 sweet_chatterjee[276205]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:16:32 np0005549474 sweet_chatterjee[276205]: --> All data devices are unavailable
Dec  7 05:16:33 np0005549474 systemd[1]: libpod-ab8e8825c70a62b6eefef591a6222354fe9519d16ca5d0ce00a3d58a3633d808.scope: Deactivated successfully.
Dec  7 05:16:33 np0005549474 podman[276188]: 2025-12-07 10:16:33.009795612 +0000 UTC m=+0.589074826 container died ab8e8825c70a62b6eefef591a6222354fe9519d16ca5d0ce00a3d58a3633d808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:16:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ed849e0ffbe6970058f6fe6b51c18f337780c95d451516a35541fe766d201144-merged.mount: Deactivated successfully.
Dec  7 05:16:33 np0005549474 podman[276188]: 2025-12-07 10:16:33.064060533 +0000 UTC m=+0.643339747 container remove ab8e8825c70a62b6eefef591a6222354fe9519d16ca5d0ce00a3d58a3633d808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chatterjee, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:16:33 np0005549474 systemd[1]: libpod-conmon-ab8e8825c70a62b6eefef591a6222354fe9519d16ca5d0ce00a3d58a3633d808.scope: Deactivated successfully.
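The second throwaway container (sweet_chatterjee) was a ceph-volume pass over candidate OSD media: it saw zero raw physical devices and one LVM device and judged nothing deployable ("All data devices are unavailable"), the expected answer on a node whose disks are already consumed by existing OSD LVs. The long form of that verdict comes from ceph-volume's inventory; a sketch, assuming the ceph-volume CLI is installed on the host (cephadm ran it inside a container here):

    import json, subprocess

    # "available": false plus reject reasons is the per-device expansion of
    # "All data devices are unavailable".
    inv = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"]))
    for dev in inv:
        print(dev["path"],
              "available" if dev["available"]
              else "rejected: " + ", ".join(dev["rejected_reasons"]))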
Dec  7 05:16:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:33.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1031: 337 pgs: 337 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 24 KiB/s wr, 34 op/s
Dec  7 05:16:33 np0005549474 podman[276325]: 2025-12-07 10:16:33.823141507 +0000 UTC m=+0.066543056 container create cfcc1e3e5e9c173743a7741a2979a9dcb31a02b0b2baea3ef78580e476dc4c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 05:16:33 np0005549474 systemd[1]: Started libpod-conmon-cfcc1e3e5e9c173743a7741a2979a9dcb31a02b0b2baea3ef78580e476dc4c2c.scope.
Dec  7 05:16:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:16:33 np0005549474 podman[276325]: 2025-12-07 10:16:33.802098076 +0000 UTC m=+0.045499635 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:16:33 np0005549474 podman[276325]: 2025-12-07 10:16:33.904447122 +0000 UTC m=+0.147848741 container init cfcc1e3e5e9c173743a7741a2979a9dcb31a02b0b2baea3ef78580e476dc4c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_noether, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:16:33 np0005549474 podman[276325]: 2025-12-07 10:16:33.913043495 +0000 UTC m=+0.156445064 container start cfcc1e3e5e9c173743a7741a2979a9dcb31a02b0b2baea3ef78580e476dc4c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_noether, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 05:16:33 np0005549474 podman[276325]: 2025-12-07 10:16:33.916907419 +0000 UTC m=+0.160309038 container attach cfcc1e3e5e9c173743a7741a2979a9dcb31a02b0b2baea3ef78580e476dc4c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_noether, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 05:16:33 np0005549474 stupefied_noether[276342]: 167 167
Dec  7 05:16:33 np0005549474 systemd[1]: libpod-cfcc1e3e5e9c173743a7741a2979a9dcb31a02b0b2baea3ef78580e476dc4c2c.scope: Deactivated successfully.
Dec  7 05:16:33 np0005549474 podman[276325]: 2025-12-07 10:16:33.919779897 +0000 UTC m=+0.163181416 container died cfcc1e3e5e9c173743a7741a2979a9dcb31a02b0b2baea3ef78580e476dc4c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_noether, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:16:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-1bc3ad59932a445b0ef37fe5cda451b154084704abc93c054d33680d1297c58e-merged.mount: Deactivated successfully.
Dec  7 05:16:33 np0005549474 podman[276325]: 2025-12-07 10:16:33.9659714 +0000 UTC m=+0.209372969 container remove cfcc1e3e5e9c173743a7741a2979a9dcb31a02b0b2baea3ef78580e476dc4c2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_noether, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 05:16:33 np0005549474 systemd[1]: libpod-conmon-cfcc1e3e5e9c173743a7741a2979a9dcb31a02b0b2baea3ef78580e476dc4c2c.scope: Deactivated successfully.
Dec  7 05:16:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:34.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:34 np0005549474 podman[276366]: 2025-12-07 10:16:34.16916202 +0000 UTC m=+0.065272101 container create 762224f9542789e1f8df743b8989a9ab97b88374547dd2434baffc6b9841e0a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Dec  7 05:16:34 np0005549474 systemd[1]: Started libpod-conmon-762224f9542789e1f8df743b8989a9ab97b88374547dd2434baffc6b9841e0a6.scope.
Dec  7 05:16:34 np0005549474 podman[276366]: 2025-12-07 10:16:34.139175727 +0000 UTC m=+0.035285878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:16:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:16:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bfb4f8b7ef4a49ebf78c6c83fb1f24dd9479d6bb8acd03f505836de2883fad4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bfb4f8b7ef4a49ebf78c6c83fb1f24dd9479d6bb8acd03f505836de2883fad4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bfb4f8b7ef4a49ebf78c6c83fb1f24dd9479d6bb8acd03f505836de2883fad4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bfb4f8b7ef4a49ebf78c6c83fb1f24dd9479d6bb8acd03f505836de2883fad4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:34 np0005549474 podman[276366]: 2025-12-07 10:16:34.271008102 +0000 UTC m=+0.167118173 container init 762224f9542789e1f8df743b8989a9ab97b88374547dd2434baffc6b9841e0a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shaw, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:16:34 np0005549474 podman[276366]: 2025-12-07 10:16:34.278558406 +0000 UTC m=+0.174668457 container start 762224f9542789e1f8df743b8989a9ab97b88374547dd2434baffc6b9841e0a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shaw, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:16:34 np0005549474 podman[276366]: 2025-12-07 10:16:34.281579608 +0000 UTC m=+0.177689699 container attach 762224f9542789e1f8df743b8989a9ab97b88374547dd2434baffc6b9841e0a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.315 256757 DEBUG nova.compute.manager [req-8485cadc-a5b3-4b7b-9c16-e24a3e8f2f1d req-5fbc4936-9dc7-4be1-8c21-47fbae1e6156 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-changed-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.317 256757 DEBUG nova.compute.manager [req-8485cadc-a5b3-4b7b-9c16-e24a3e8f2f1d req-5fbc4936-9dc7-4be1-8c21-47fbae1e6156 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Refreshing instance network info cache due to event network-changed-8f12df55-697f-4079-af36-c87cc2d6cff1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.318 256757 DEBUG oslo_concurrency.lockutils [req-8485cadc-a5b3-4b7b-9c16-e24a3e8f2f1d req-5fbc4936-9dc7-4be1-8c21-47fbae1e6156 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.318 256757 DEBUG oslo_concurrency.lockutils [req-8485cadc-a5b3-4b7b-9c16-e24a3e8f2f1d req-5fbc4936-9dc7-4be1-8c21-47fbae1e6156 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.318 256757 DEBUG nova.network.neutron [req-8485cadc-a5b3-4b7b-9c16-e24a3e8f2f1d req-5fbc4936-9dc7-4be1-8c21-47fbae1e6156 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Refreshing network info cache for port 8f12df55-697f-4079-af36-c87cc2d6cff1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.368 256757 DEBUG oslo_concurrency.lockutils [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.369 256757 DEBUG oslo_concurrency.lockutils [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.369 256757 DEBUG oslo_concurrency.lockutils [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.369 256757 DEBUG oslo_concurrency.lockutils [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.370 256757 DEBUG oslo_concurrency.lockutils [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.371 256757 INFO nova.compute.manager [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Terminating instance
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.372 256757 DEBUG nova.compute.manager [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  7 05:16:34 np0005549474 kernel: tap8f12df55-69 (unregistering): left promiscuous mode
Dec  7 05:16:34 np0005549474 NetworkManager[49051]: <info>  [1765102594.4297] device (tap8f12df55-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.438 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:34 np0005549474 ovn_controller[154296]: 2025-12-07T10:16:34Z|00086|binding|INFO|Releasing lport 8f12df55-697f-4079-af36-c87cc2d6cff1 from this chassis (sb_readonly=0)
Dec  7 05:16:34 np0005549474 ovn_controller[154296]: 2025-12-07T10:16:34Z|00087|binding|INFO|Setting lport 8f12df55-697f-4079-af36-c87cc2d6cff1 down in Southbound
Dec  7 05:16:34 np0005549474 ovn_controller[154296]: 2025-12-07T10:16:34Z|00088|binding|INFO|Removing iface tap8f12df55-69 ovn-installed in OVS
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.442 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.449 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:3c:19 10.100.0.6'], port_security=['fa:16:3e:09:3c:19 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0e659348-3b39-4619-862c-1b89d81d26b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0e97b448-8139-4111-81bb-273831e7b5f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a5ab910-6849-428d-baf0-515c2f9ffc5e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=8f12df55-697f-4079-af36-c87cc2d6cff1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.451 164143 INFO neutron.agent.ovn.metadata.agent [-] Port 8f12df55-697f-4079-af36-c87cc2d6cff1 in datapath 3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6 unbound from our chassis#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.454 164143 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.456 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[7bcf10a9-ffc3-48d1-bee6-b25929b36320]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.456 164143 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6 namespace which is not needed anymore#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.470 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:34 np0005549474 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  7 05:16:34 np0005549474 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000b.scope: Consumed 15.777s CPU time.
Dec  7 05:16:34 np0005549474 systemd-machined[217882]: Machine qemu-5-instance-0000000b terminated.
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]: {
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:    "0": [
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:        {
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            "devices": [
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "/dev/loop3"
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            ],
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            "lv_name": "ceph_lv0",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            "lv_size": "21470642176",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            "name": "ceph_lv0",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            "tags": {
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.cluster_name": "ceph",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.crush_device_class": "",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.encrypted": "0",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.osd_id": "0",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.type": "block",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.vdo": "0",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:                "ceph.with_tpm": "0"
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            },
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            "type": "block",
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:            "vg_name": "ceph_vg0"
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:        }
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]:    ]
Dec  7 05:16:34 np0005549474 wonderful_shaw[276383]: }
Dec  7 05:16:34 np0005549474 systemd[1]: libpod-762224f9542789e1f8df743b8989a9ab97b88374547dd2434baffc6b9841e0a6.scope: Deactivated successfully.
Dec  7 05:16:34 np0005549474 podman[276366]: 2025-12-07 10:16:34.566182837 +0000 UTC m=+0.462292898 container died 762224f9542789e1f8df743b8989a9ab97b88374547dd2434baffc6b9841e0a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:16:34 np0005549474 podman[276392]: 2025-12-07 10:16:34.576570638 +0000 UTC m=+0.110068245 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  7 05:16:34 np0005549474 podman[276394]: 2025-12-07 10:16:34.59024855 +0000 UTC m=+0.124463257 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  7 05:16:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6bfb4f8b7ef4a49ebf78c6c83fb1f24dd9479d6bb8acd03f505836de2883fad4-merged.mount: Deactivated successfully.
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.600 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.616 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.618 256757 INFO nova.virt.libvirt.driver [-] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Instance destroyed successfully.#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.619 256757 DEBUG nova.objects.instance [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'resources' on Instance uuid 0e659348-3b39-4619-862c-1b89d81d26b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:16:34 np0005549474 podman[276366]: 2025-12-07 10:16:34.630132651 +0000 UTC m=+0.526242712 container remove 762224f9542789e1f8df743b8989a9ab97b88374547dd2434baffc6b9841e0a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shaw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.632 256757 DEBUG nova.virt.libvirt.vif [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1863127630',display_name='tempest-TestNetworkBasicOps-server-1863127630',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1863127630',id=11,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDleUosjG6jxbfP9ThQfCusjOkxMkoL0sXnnx4Uq8QkM4LpRdNkO8Kp6H6zij9tS2guSYsf4VYz23xFdwVRLlJh3l6SMCR3OC+X8RwUSHtaO65EFq5XlWeUP9iyGj+o/Q==',key_name='tempest-TestNetworkBasicOps-28232717',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:15:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-jcqd76e4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:15:48Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=0e659348-3b39-4619-862c-1b89d81d26b3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.633 256757 DEBUG nova.network.os_vif_util [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.634 256757 DEBUG nova.network.os_vif_util [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:3c:19,bridge_name='br-int',has_traffic_filtering=True,id=8f12df55-697f-4079-af36-c87cc2d6cff1,network=Network(3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f12df55-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.634 256757 DEBUG os_vif [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:3c:19,bridge_name='br-int',has_traffic_filtering=True,id=8f12df55-697f-4079-af36-c87cc2d6cff1,network=Network(3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f12df55-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.639 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.639 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f12df55-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.641 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.644 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.646 256757 INFO os_vif [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:3c:19,bridge_name='br-int',has_traffic_filtering=True,id=8f12df55-697f-4079-af36-c87cc2d6cff1,network=Network(3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f12df55-69')#033[00m
Dec  7 05:16:34 np0005549474 systemd[1]: libpod-conmon-762224f9542789e1f8df743b8989a9ab97b88374547dd2434baffc6b9841e0a6.scope: Deactivated successfully.
Dec  7 05:16:34 np0005549474 neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6[275751]: [NOTICE]   (275755) : haproxy version is 2.8.14-c23fe91
Dec  7 05:16:34 np0005549474 neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6[275751]: [NOTICE]   (275755) : path to executable is /usr/sbin/haproxy
Dec  7 05:16:34 np0005549474 neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6[275751]: [WARNING]  (275755) : Exiting Master process...
Dec  7 05:16:34 np0005549474 neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6[275751]: [WARNING]  (275755) : Exiting Master process...
Dec  7 05:16:34 np0005549474 neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6[275751]: [ALERT]    (275755) : Current worker (275757) exited with code 143 (Terminated)
Dec  7 05:16:34 np0005549474 neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6[275751]: [WARNING]  (275755) : All workers exited. Exiting... (0)
Dec  7 05:16:34 np0005549474 systemd[1]: libpod-34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7.scope: Deactivated successfully.
Dec  7 05:16:34 np0005549474 podman[276461]: 2025-12-07 10:16:34.670032433 +0000 UTC m=+0.069220619 container died 34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.675 256757 DEBUG nova.compute.manager [req-5d7e5903-c760-4e5c-9e36-fc1650297339 req-d1ef0690-96f2-4f3e-a71b-208db2323f6c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-vif-unplugged-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.676 256757 DEBUG oslo_concurrency.lockutils [req-5d7e5903-c760-4e5c-9e36-fc1650297339 req-d1ef0690-96f2-4f3e-a71b-208db2323f6c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.676 256757 DEBUG oslo_concurrency.lockutils [req-5d7e5903-c760-4e5c-9e36-fc1650297339 req-d1ef0690-96f2-4f3e-a71b-208db2323f6c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.676 256757 DEBUG oslo_concurrency.lockutils [req-5d7e5903-c760-4e5c-9e36-fc1650297339 req-d1ef0690-96f2-4f3e-a71b-208db2323f6c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.677 256757 DEBUG nova.compute.manager [req-5d7e5903-c760-4e5c-9e36-fc1650297339 req-d1ef0690-96f2-4f3e-a71b-208db2323f6c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] No waiting events found dispatching network-vif-unplugged-8f12df55-697f-4079-af36-c87cc2d6cff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.677 256757 DEBUG nova.compute.manager [req-5d7e5903-c760-4e5c-9e36-fc1650297339 req-d1ef0690-96f2-4f3e-a71b-208db2323f6c ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-vif-unplugged-8f12df55-697f-4079-af36-c87cc2d6cff1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  7 05:16:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7-userdata-shm.mount: Deactivated successfully.
Dec  7 05:16:34 np0005549474 podman[276461]: 2025-12-07 10:16:34.718385593 +0000 UTC m=+0.117573770 container cleanup 34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  7 05:16:34 np0005549474 systemd[1]: libpod-conmon-34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7.scope: Deactivated successfully.
Dec  7 05:16:34 np0005549474 podman[276546]: 2025-12-07 10:16:34.809492355 +0000 UTC m=+0.059756872 container remove 34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.818 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[637815d8-758e-4671-8bc9-93fbdbfd6cec]: (4, ('Sun Dec  7 10:16:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6 (34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7)\n34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7\nSun Dec  7 10:16:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6 (34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7)\n34bbb14cd0785488c137570aace270019e017ea561cacebda03039b6ae1c00a7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.821 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[827d2b79-9aa1-434d-9719-9b5dae4d6855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.822 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c3486ca-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.861 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:34 np0005549474 kernel: tap3c3486ca-b0: left promiscuous mode
Dec  7 05:16:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0124da37d099e6f14ad9df6de6273406e834c7ad862d2f6fab147538fd4b8ad3-merged.mount: Deactivated successfully.
Dec  7 05:16:34 np0005549474 nova_compute[256753]: 2025-12-07 10:16:34.876 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.878 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[a14912f6-e82e-4f16-9c88-0fc1018e3b55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.892 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc7399d-dbe1-4e54-b1bf-57845efaaa3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.893 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[faf1ab6a-78f4-4fe7-b193-49af497a0c3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.913 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[0406c6c1-bcc0-48e1-9da7-57abf512f9c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450998, 'reachable_time': 36081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276592, 'error': None, 'target': 'ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.916 164283 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  7 05:16:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:34.916 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[f66c9a7a-fc7c-4361-bd06-6f37ab63f423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:16:34 np0005549474 systemd[1]: run-netns-ovnmeta\x2d3c3486ca\x2dbfaf\x2d48a3\x2da2b7\x2d7df7cf1dd4b6.mount: Deactivated successfully.
Dec  7 05:16:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:16:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:35 np0005549474 nova_compute[256753]: 2025-12-07 10:16:35.112 256757 INFO nova.virt.libvirt.driver [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Deleting instance files /var/lib/nova/instances/0e659348-3b39-4619-862c-1b89d81d26b3_del#033[00m
Dec  7 05:16:35 np0005549474 nova_compute[256753]: 2025-12-07 10:16:35.112 256757 INFO nova.virt.libvirt.driver [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Deletion of /var/lib/nova/instances/0e659348-3b39-4619-862c-1b89d81d26b3_del complete#033[00m
Dec  7 05:16:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:35.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:35 np0005549474 nova_compute[256753]: 2025-12-07 10:16:35.189 256757 INFO nova.compute.manager [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Dec  7 05:16:35 np0005549474 nova_compute[256753]: 2025-12-07 10:16:35.190 256757 DEBUG oslo.service.loopingcall [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  7 05:16:35 np0005549474 nova_compute[256753]: 2025-12-07 10:16:35.190 256757 DEBUG nova.compute.manager [-] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  7 05:16:35 np0005549474 nova_compute[256753]: 2025-12-07 10:16:35.190 256757 DEBUG nova.network.neutron [-] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  7 05:16:35 np0005549474 podman[276638]: 2025-12-07 10:16:35.329257809 +0000 UTC m=+0.045761402 container create fb406f83868f168a8d41d1d38856ee62a5f1617e7979533b53ff1e98971f7a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 05:16:35 np0005549474 systemd[1]: Started libpod-conmon-fb406f83868f168a8d41d1d38856ee62a5f1617e7979533b53ff1e98971f7a63.scope.
Dec  7 05:16:35 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:16:35 np0005549474 podman[276638]: 2025-12-07 10:16:35.309813763 +0000 UTC m=+0.026317386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:16:35 np0005549474 podman[276638]: 2025-12-07 10:16:35.412514297 +0000 UTC m=+0.129017920 container init fb406f83868f168a8d41d1d38856ee62a5f1617e7979533b53ff1e98971f7a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:16:35 np0005549474 podman[276638]: 2025-12-07 10:16:35.41926118 +0000 UTC m=+0.135764773 container start fb406f83868f168a8d41d1d38856ee62a5f1617e7979533b53ff1e98971f7a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 05:16:35 np0005549474 podman[276638]: 2025-12-07 10:16:35.422634392 +0000 UTC m=+0.139138005 container attach fb406f83868f168a8d41d1d38856ee62a5f1617e7979533b53ff1e98971f7a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 05:16:35 np0005549474 elegant_hellman[276657]: 167 167
Dec  7 05:16:35 np0005549474 systemd[1]: libpod-fb406f83868f168a8d41d1d38856ee62a5f1617e7979533b53ff1e98971f7a63.scope: Deactivated successfully.
Dec  7 05:16:35 np0005549474 podman[276638]: 2025-12-07 10:16:35.425872729 +0000 UTC m=+0.142376322 container died fb406f83868f168a8d41d1d38856ee62a5f1617e7979533b53ff1e98971f7a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 05:16:35 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2677afdfb13259bd208f715f0ed31799a380f25b66b56393328e17dc45f08923-merged.mount: Deactivated successfully.
Dec  7 05:16:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1032: 337 pgs: 337 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 8.8 KiB/s wr, 33 op/s
Dec  7 05:16:35 np0005549474 podman[276638]: 2025-12-07 10:16:35.459532002 +0000 UTC m=+0.176035625 container remove fb406f83868f168a8d41d1d38856ee62a5f1617e7979533b53ff1e98971f7a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 05:16:35 np0005549474 systemd[1]: libpod-conmon-fb406f83868f168a8d41d1d38856ee62a5f1617e7979533b53ff1e98971f7a63.scope: Deactivated successfully.
Dec  7 05:16:35 np0005549474 podman[276681]: 2025-12-07 10:16:35.664651284 +0000 UTC m=+0.060712376 container create dfd756c22ae7e552d8c76e443105d75b13e9ec23b990c115b74c6709975ddeaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_borg, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:16:35 np0005549474 podman[276681]: 2025-12-07 10:16:35.633720426 +0000 UTC m=+0.029781578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:16:35 np0005549474 systemd[1]: Started libpod-conmon-dfd756c22ae7e552d8c76e443105d75b13e9ec23b990c115b74c6709975ddeaf.scope.
Dec  7 05:16:35 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:16:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c07ecf14f445b7cfbd13373c464fda71a72d1e74408914e1d154bb9858b195/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c07ecf14f445b7cfbd13373c464fda71a72d1e74408914e1d154bb9858b195/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c07ecf14f445b7cfbd13373c464fda71a72d1e74408914e1d154bb9858b195/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:35 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c07ecf14f445b7cfbd13373c464fda71a72d1e74408914e1d154bb9858b195/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:16:35 np0005549474 podman[276681]: 2025-12-07 10:16:35.798666979 +0000 UTC m=+0.194728121 container init dfd756c22ae7e552d8c76e443105d75b13e9ec23b990c115b74c6709975ddeaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_borg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:16:35 np0005549474 podman[276681]: 2025-12-07 10:16:35.811520958 +0000 UTC m=+0.207582020 container start dfd756c22ae7e552d8c76e443105d75b13e9ec23b990c115b74c6709975ddeaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_borg, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:16:35 np0005549474 podman[276681]: 2025-12-07 10:16:35.81531818 +0000 UTC m=+0.211379282 container attach dfd756c22ae7e552d8c76e443105d75b13e9ec23b990c115b74c6709975ddeaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_borg, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:16:35 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:35.924 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  7 05:16:35 np0005549474 nova_compute[256753]: 2025-12-07 10:16:35.982 256757 DEBUG nova.network.neutron [req-8485cadc-a5b3-4b7b-9c16-e24a3e8f2f1d req-5fbc4936-9dc7-4be1-8c21-47fbae1e6156 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updated VIF entry in instance network info cache for port 8f12df55-697f-4079-af36-c87cc2d6cff1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  7 05:16:35 np0005549474 nova_compute[256753]: 2025-12-07 10:16:35.983 256757 DEBUG nova.network.neutron [req-8485cadc-a5b3-4b7b-9c16-e24a3e8f2f1d req-5fbc4936-9dc7-4be1-8c21-47fbae1e6156 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updating instance_info_cache with network_info: [{"id": "8f12df55-697f-4079-af36-c87cc2d6cff1", "address": "fa:16:3e:09:3c:19", "network": {"id": "3c3486ca-bfaf-48a3-a2b7-7df7cf1dd4b6", "bridge": "br-int", "label": "tempest-network-smoke--216795177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f12df55-69", "ovs_interfaceid": "8f12df55-697f-4079-af36-c87cc2d6cff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.002 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
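
The cached network_info above is plain JSON, so the VIF entry can be inspected directly. A minimal sketch in Python that pulls the port ID, MAC, and fixed IPs back out of it (the structure is copied from the log line and trimmed to just the fields read here):

    import json

    # Trimmed copy of the VIF entry logged above; only the fields used
    # below are kept.
    network_info = json.loads('''[{"id": "8f12df55-697f-4079-af36-c87cc2d6cff1",
      "address": "fa:16:3e:09:3c:19",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.6", "type": "fixed"}]}]}}]''')
    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['id'], vif['address'], ips)
    # -> 8f12df55-697f-4079-af36-c87cc2d6cff1 fa:16:3e:09:3c:19 ['10.100.0.6']
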
Dec  7 05:16:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:36.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
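
Each radosgw request appears as a "starting new request"/"req done" pair plus one beast access line like the one above. A hypothetical parser for lines of that exact shape; the regex is inferred from these samples, not from any official format specification:

    import re

    # Fields: client IP, user, timestamp, request line, status, bytes, latency.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    def parse_beast(line):
        m = BEAST_RE.search(line)
        return m.groupdict() if m else None

    sample = ('beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous '
              '[07/Dec/2025:10:16:36.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.001000027s')
    print(parse_beast(sample)['latency'])  # 0.001000027
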
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.012 256757 DEBUG oslo_concurrency.lockutils [req-8485cadc-a5b3-4b7b-9c16-e24a3e8f2f1d req-5fbc4936-9dc7-4be1-8c21-47fbae1e6156 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-0e659348-3b39-4619-862c-1b89d81d26b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.025 256757 DEBUG nova.network.neutron [-] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.049 256757 INFO nova.compute.manager [-] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Took 0.86 seconds to deallocate network for instance.
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.100 256757 DEBUG oslo_concurrency.lockutils [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.101 256757 DEBUG oslo_concurrency.lockutils [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.177 256757 DEBUG oslo_concurrency.processutils [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:16:36 np0005549474 lvm[276790]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:16:36 np0005549474 lvm[276790]: VG ceph_vg0 finished
Dec  7 05:16:36 np0005549474 gracious_borg[276697]: {}
Dec  7 05:16:36 np0005549474 systemd[1]: libpod-dfd756c22ae7e552d8c76e443105d75b13e9ec23b990c115b74c6709975ddeaf.scope: Deactivated successfully.
Dec  7 05:16:36 np0005549474 podman[276681]: 2025-12-07 10:16:36.610774071 +0000 UTC m=+1.006835133 container died dfd756c22ae7e552d8c76e443105d75b13e9ec23b990c115b74c6709975ddeaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:16:36 np0005549474 systemd[1]: libpod-dfd756c22ae7e552d8c76e443105d75b13e9ec23b990c115b74c6709975ddeaf.scope: Consumed 1.369s CPU time.
Dec  7 05:16:36 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a1c07ecf14f445b7cfbd13373c464fda71a72d1e74408914e1d154bb9858b195-merged.mount: Deactivated successfully.
Dec  7 05:16:36 np0005549474 podman[276681]: 2025-12-07 10:16:36.644423314 +0000 UTC m=+1.040484376 container remove dfd756c22ae7e552d8c76e443105d75b13e9ec23b990c115b74c6709975ddeaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_borg, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 05:16:36 np0005549474 systemd[1]: libpod-conmon-dfd756c22ae7e552d8c76e443105d75b13e9ec23b990c115b74c6709975ddeaf.scope: Deactivated successfully.
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.672 256757 DEBUG oslo_concurrency.processutils [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.679 256757 DEBUG nova.compute.provider_tree [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
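
As the Running cmd / CMD returned pair above shows, the resource tracker sizes its RBD-backed disk inventory by shelling out to ceph df. A sketch that re-runs the same command and reads the cluster totals; the 'stats' -> 'total_bytes' key layout is assumed from recent Ceph JSON output and should be checked against the deployed release:

    import json
    import subprocess

    # Exact command from the log; requires the openstack keyring to be usable.
    cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
           '--conf', '/etc/ceph/ceph.conf']
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])
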
Dec  7 05:16:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.695 256757 DEBUG nova.scheduler.client.report [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
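
The schedulable capacity implied by that inventory record follows Placement's usual rule, usable = (total - reserved) * allocation_ratio. A quick check against the numbers in the log (the rule itself is standard Placement semantics, not something this log states):

    # Values copied from the inventory line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
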
Dec  7 05:16:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:16:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:16:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.717 256757 DEBUG oslo_concurrency.lockutils [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.741 256757 DEBUG nova.compute.manager [req-e50dbda7-0093-4f43-92af-4fcf9065bd9d req-7c9d7773-37c8-435f-85ec-3360ecea0843 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.742 256757 DEBUG oslo_concurrency.lockutils [req-e50dbda7-0093-4f43-92af-4fcf9065bd9d req-7c9d7773-37c8-435f-85ec-3360ecea0843 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.742 256757 DEBUG oslo_concurrency.lockutils [req-e50dbda7-0093-4f43-92af-4fcf9065bd9d req-7c9d7773-37c8-435f-85ec-3360ecea0843 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.742 256757 DEBUG oslo_concurrency.lockutils [req-e50dbda7-0093-4f43-92af-4fcf9065bd9d req-7c9d7773-37c8-435f-85ec-3360ecea0843 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.743 256757 DEBUG nova.compute.manager [req-e50dbda7-0093-4f43-92af-4fcf9065bd9d req-7c9d7773-37c8-435f-85ec-3360ecea0843 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] No waiting events found dispatching network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.743 256757 WARNING nova.compute.manager [req-e50dbda7-0093-4f43-92af-4fcf9065bd9d req-7c9d7773-37c8-435f-85ec-3360ecea0843 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received unexpected event network-vif-plugged-8f12df55-697f-4079-af36-c87cc2d6cff1 for instance with vm_state deleted and task_state None.
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.743 256757 DEBUG nova.compute.manager [req-e50dbda7-0093-4f43-92af-4fcf9065bd9d req-7c9d7773-37c8-435f-85ec-3360ecea0843 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Received event network-vif-deleted-8f12df55-697f-4079-af36-c87cc2d6cff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.761 256757 INFO nova.scheduler.client.report [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Deleted allocations for instance 0e659348-3b39-4619-862c-1b89d81d26b3
Dec  7 05:16:36 np0005549474 nova_compute[256753]: 2025-12-07 10:16:36.842 256757 DEBUG oslo_concurrency.lockutils [None req-fc75bd4f-1ecf-4c1a-9b30-f014fc4f0a94 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "0e659348-3b39-4619-862c-1b89d81d26b3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
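
The Acquiring/acquired/released triples above are oslo.concurrency's standard lock trace. A minimal sketch of the pattern that produces them (the lock name is taken from the log; the bodies are placeholders):

    from oslo_concurrency import lockutils

    # Decorator form, as used by ResourceTracker methods.
    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass  # tracker work happens while the lock is held

    # Equivalent context-manager form.
    with lockutils.lock('compute_resources'):
        pass
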
Dec  7 05:16:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:37.180Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:16:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:37.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:16:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:37.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1033: 337 pgs: 337 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 8.8 KiB/s wr, 33 op/s
Dec  7 05:16:37 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:16:37 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:16:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:38.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:38.628 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:16:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:38.629 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:16:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:16:38.629 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:16:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:38.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:16:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:39.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:39 np0005549474 podman[276835]: 2025-12-07 10:16:39.286532091 +0000 UTC m=+0.087988077 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:16:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 10 KiB/s wr, 64 op/s
Dec  7 05:16:39 np0005549474 nova_compute[256753]: 2025-12-07 10:16:39.643 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:39 np0005549474 nova_compute[256753]: 2025-12-07 10:16:39.830 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:39 np0005549474 nova_compute[256753]: 2025-12-07 10:16:39.949 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:39] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  7 05:16:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:39] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec  7 05:16:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:39 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:39 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:39 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:16:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:40.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:41 np0005549474 nova_compute[256753]: 2025-12-07 10:16:41.055 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:41.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 10 KiB/s wr, 64 op/s
Dec  7 05:16:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:42.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:16:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:16:42
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', '.rgw.root', 'volumes', '.nfs', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'vms', 'backups', 'images', 'cephfs.cephfs.data']
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:16:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
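
Spot-checking the pg_autoscaler arithmetic above: every reported pg target equals space_ratio * bias * 300. The factor 300 is inferred from these numbers alone (mon_target_pg_per_osd=100 across 3 OSDs would produce it); treat it as an assumption, not a documented constant:

    # (pool, space ratio, bias, reported pg target) copied from the log.
    pools = [
        ('.mgr',               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ('images',             0.000665858301588852,  1.0, 0.19975749047665559),
        ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]
    for name, ratio, bias, target in pools:
        assert abs(ratio * bias * 300 - target) < 1e-12, name
    print('pg target == space_ratio * bias * 300 for all sampled pools')
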
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:16:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:16:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:43.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Dec  7 05:16:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:44.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:44 np0005549474 nova_compute[256753]: 2025-12-07 10:16:44.647 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:44 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:16:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:45.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1037: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  7 05:16:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:46.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:46 np0005549474 nova_compute[256753]: 2025-12-07 10:16:46.094 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:47.181Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:16:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:47.182Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:16:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:47.182Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:16:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:47.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1038: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  7 05:16:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:48.846Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:16:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:48.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:16:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:49.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1039: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec  7 05:16:49 np0005549474 nova_compute[256753]: 2025-12-07 10:16:49.611 256757 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765102594.6094434, 0e659348-3b39-4619-862c-1b89d81d26b3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  7 05:16:49 np0005549474 nova_compute[256753]: 2025-12-07 10:16:49.612 256757 INFO nova.compute.manager [-] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] VM Stopped (Lifecycle Event)
Dec  7 05:16:49 np0005549474 nova_compute[256753]: 2025-12-07 10:16:49.637 256757 DEBUG nova.compute.manager [None req-11a3e854-f70b-4014-96df-9101a62ef9ad - - - - - -] [instance: 0e659348-3b39-4619-862c-1b89d81d26b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:16:49 np0005549474 nova_compute[256753]: 2025-12-07 10:16:49.651 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:49] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Dec  7 05:16:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:49] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Dec  7 05:16:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:49 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:49 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:49 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:16:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:50.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:51 np0005549474 nova_compute[256753]: 2025-12-07 10:16:51.107 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:51.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1040: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:16:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:52.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:53.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1041: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:16:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  7 05:16:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:54.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  7 05:16:54 np0005549474 nova_compute[256753]: 2025-12-07 10:16:54.701 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:54 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:16:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:54 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:16:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:54 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:16:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:16:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:55.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:55 np0005549474 nova_compute[256753]: 2025-12-07 10:16:55.398 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "3fa72663-9aaa-4e36-92ba-35bec3874b64" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:16:55 np0005549474 nova_compute[256753]: 2025-12-07 10:16:55.399 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:16:55 np0005549474 nova_compute[256753]: 2025-12-07 10:16:55.434 256757 DEBUG nova.compute.manager [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  7 05:16:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1042: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:16:55 np0005549474 nova_compute[256753]: 2025-12-07 10:16:55.545 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:16:55 np0005549474 nova_compute[256753]: 2025-12-07 10:16:55.545 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:16:55 np0005549474 nova_compute[256753]: 2025-12-07 10:16:55.553 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  7 05:16:55 np0005549474 nova_compute[256753]: 2025-12-07 10:16:55.553 256757 INFO nova.compute.claims [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Claim successful on node compute-0.ctlplane.example.com
Dec  7 05:16:55 np0005549474 nova_compute[256753]: 2025-12-07 10:16:55.688 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:16:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:16:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:56.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:16:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:16:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556038211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.100 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.107 256757 DEBUG nova.compute.provider_tree [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.128 256757 DEBUG nova.scheduler.client.report [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.142 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.153 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.154 256757 DEBUG nova.compute.manager [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.220 256757 DEBUG nova.compute.manager [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.220 256757 DEBUG nova.network.neutron [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.242 256757 INFO nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.257 256757 DEBUG nova.compute.manager [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.350 256757 DEBUG nova.compute.manager [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.352 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.353 256757 INFO nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Creating image(s)
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.393 256757 DEBUG nova.storage.rbd_utils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.437 256757 DEBUG nova.storage.rbd_utils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.481 256757 DEBUG nova.storage.rbd_utils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.487 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.574 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
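
Per the two lines above, Nova probes the cached base image with qemu-img info, wrapped in oslo prlimit address-space and CPU caps. The same probe without the safety wrapper (path copied from the log; the key names are standard qemu-img JSON output):

    import json
    import subprocess

    path = '/var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b'
    out = subprocess.run(
        ['qemu-img', 'info', path, '--force-share', '--output=json'],
        check=True, capture_output=True, text=True).stdout
    info = json.loads(out)
    print(info['format'], info['virtual-size'])  # format name and size in bytes
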
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.575 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.576 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.577 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "c2abdbc7095ab4b54534ae7106492229fa86ab0b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.605 256757 DEBUG nova.storage.rbd_utils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.611 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.731 256757 DEBUG nova.policy [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f27cf20bf8c4429aa12589418a57e41', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ad61a97ffab4252be3eafb028b560c1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
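
[editor's note] The policy DEBUG line above records a non-fatal authorization check: the requester holds only the member/reader roles, so network:attach_external_network is denied, and the build evidently proceeds because the requested network is a tenant network. A generic oslo.policy sketch of such a check; the rule name and credential fields come from the log, while the Enforcer setup is the library's stock pattern (Nova layers its registered rule defaults on top of this):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    creds = {
        "user_id": "8f27cf20bf8c4429aa12589418a57e41",
        "project_id": "2ad61a97ffab4252be3eafb028b560c1",
        "roles": ["member", "reader"],
    }
    # enforce() returns False rather than raising when the check fails.
    if not enforcer.enforce("network:attach_external_network", {}, creds):
        print("not authorized to attach external networks")
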
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.870 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c2abdbc7095ab4b54534ae7106492229fa86ab0b 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:16:56 np0005549474 nova_compute[256753]: 2025-12-07 10:16:56.952 256757 DEBUG nova.storage.rbd_utils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] resizing rbd image 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
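
[editor's note] The preceding lines show the root disk being imported into the Ceph "vms" pool via the rbd CLI and then resized to the flavor's 1 GiB root disk through the librbd bindings. A sketch of the resize step using the Ceph Python bindings; the pool, image name, client id, and size are taken from the log, the rest is the bindings' generic pattern:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            image = rbd.Image(ioctx, "3fa72663-9aaa-4e36-92ba-35bec3874b64_disk")
            try:
                image.resize(1073741824)  # 1 GiB, matching the logged resize
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
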
Dec  7 05:16:57 np0005549474 nova_compute[256753]: 2025-12-07 10:16:57.091 256757 DEBUG nova.objects.instance [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'migration_context' on Instance uuid 3fa72663-9aaa-4e36-92ba-35bec3874b64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:16:57 np0005549474 nova_compute[256753]: 2025-12-07 10:16:57.111 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  7 05:16:57 np0005549474 nova_compute[256753]: 2025-12-07 10:16:57.112 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Ensure instance console log exists: /var/lib/nova/instances/3fa72663-9aaa-4e36-92ba-35bec3874b64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  7 05:16:57 np0005549474 nova_compute[256753]: 2025-12-07 10:16:57.112 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:16:57 np0005549474 nova_compute[256753]: 2025-12-07 10:16:57.113 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:16:57 np0005549474 nova_compute[256753]: 2025-12-07 10:16:57.113 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
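
[editor's note] The acquire/release pair above is oslo.concurrency's standard named-lock tracing; _allocate_mdevs holds "vgpu_resources" only momentarily because there are no vGPUs to allocate here. The pattern behind these DEBUG lines, as a sketch (the decorated function below is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        # Runs with the in-process named lock held; lockutils' inner wrapper
        # emits the "acquired by"/"released by" DEBUG lines seen above.
        pass
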
Dec  7 05:16:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:57.182Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:16:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:57.183Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:16:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:57.183Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
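
[editor's note] Alertmanager cannot deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2; both attempts fail at the TCP layer with i/o timeouts. A quick stdlib probe to confirm basic reachability of those endpoints (hosts and port are from the log; the 5-second timeout is an assumption):

    import socket

    for host in ("192.168.122.101", "192.168.122.102"):
        s = socket.socket()
        s.settimeout(5)
        try:
            s.connect((host, 8443))
            print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)
        finally:
            s.close()
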
Dec  7 05:16:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:57.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:16:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:16:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1043: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:16:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:16:57 np0005549474 nova_compute[256753]: 2025-12-07 10:16:57.914 256757 DEBUG nova.network.neutron [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Successfully created port: 9223ee94-eb58-4566-a91c-7a7f60d59c18 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  7 05:16:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:16:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:58.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:16:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:16:58.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:16:58 np0005549474 nova_compute[256753]: 2025-12-07 10:16:58.987 256757 DEBUG nova.network.neutron [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Successfully updated port: 9223ee94-eb58-4566-a91c-7a7f60d59c18 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  7 05:16:59 np0005549474 nova_compute[256753]: 2025-12-07 10:16:59.003 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:16:59 np0005549474 nova_compute[256753]: 2025-12-07 10:16:59.004 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquired lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:16:59 np0005549474 nova_compute[256753]: 2025-12-07 10:16:59.004 256757 DEBUG nova.network.neutron [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  7 05:16:59 np0005549474 nova_compute[256753]: 2025-12-07 10:16:59.078 256757 DEBUG nova.compute.manager [req-8b29d7f6-da3b-4103-ba75-b6a8c70cebab req-b866f777-339e-4b22-977d-eb45db907e14 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Received event network-changed-9223ee94-eb58-4566-a91c-7a7f60d59c18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:16:59 np0005549474 nova_compute[256753]: 2025-12-07 10:16:59.079 256757 DEBUG nova.compute.manager [req-8b29d7f6-da3b-4103-ba75-b6a8c70cebab req-b866f777-339e-4b22-977d-eb45db907e14 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Refreshing instance network info cache due to event network-changed-9223ee94-eb58-4566-a91c-7a7f60d59c18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:16:59 np0005549474 nova_compute[256753]: 2025-12-07 10:16:59.079 256757 DEBUG oslo_concurrency.lockutils [req-8b29d7f6-da3b-4103-ba75-b6a8c70cebab req-b866f777-339e-4b22-977d-eb45db907e14 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:16:59 np0005549474 nova_compute[256753]: 2025-12-07 10:16:59.166 256757 DEBUG nova.network.neutron [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  7 05:16:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:16:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:16:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:59.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:16:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1044: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:16:59 np0005549474 nova_compute[256753]: 2025-12-07 10:16:59.761 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:16:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:16:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:16:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:17:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:59 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:59 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:16:59 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:17:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:00.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.050 256757 DEBUG nova.network.neutron [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Updating instance_info_cache with network_info: [{"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.072 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Releasing lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.072 256757 DEBUG nova.compute.manager [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Instance network_info: |[{"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.073 256757 DEBUG oslo_concurrency.lockutils [req-8b29d7f6-da3b-4103-ba75-b6a8c70cebab req-b866f777-339e-4b22-977d-eb45db907e14 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.073 256757 DEBUG nova.network.neutron [req-8b29d7f6-da3b-4103-ba75-b6a8c70cebab req-b866f777-339e-4b22-977d-eb45db907e14 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Refreshing network info cache for port 9223ee94-eb58-4566-a91c-7a7f60d59c18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.078 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Start _get_guest_xml network_info=[{"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'image_id': 'af7b5730-2fa9-449f-8ccb-a9519582f1b2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
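
[editor's note] The network_info structure logged above is valid JSON and carries everything the guest XML below needs for the NIC: MAC address, fixed IPs, MTU, and the OVS binding details. A sketch of extracting those fields, assuming the logged list has been captured into a string named network_info_json:

    import json

    vifs = json.loads(network_info_json)  # the [{"id": ...}] list from the log
    for vif in vifs:
        fixed_ips = [ip["address"]
                     for subnet in vif["network"]["subnets"]
                     for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["id"], vif["address"], fixed_ips,
              "mtu", vif["network"]["meta"]["mtu"])
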
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.086 256757 WARNING nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.092 256757 DEBUG nova.virt.libvirt.host [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.092 256757 DEBUG nova.virt.libvirt.host [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.103 256757 DEBUG nova.virt.libvirt.host [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.104 256757 DEBUG nova.virt.libvirt.host [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.104 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.105 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-07T10:06:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bc1a767b-c985-4370-b41e-5cb294d603d7',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-07T10:06:31Z,direct_url=<?>,disk_format='qcow2',id=af7b5730-2fa9-449f-8ccb-a9519582f1b2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f2774f82d095448bbb688700083cf81d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-07T10:06:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.106 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.106 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.106 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.107 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.107 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.107 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.108 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.108 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.108 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.109 256757 DEBUG nova.virt.hardware [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.114 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:17:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 05:17:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224249925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.614 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
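
[editor's note] "ceph mon dump" is how the libvirt driver learns the monitor addresses that appear as <host> elements in the RBD <disk> sources of the guest XML further down. The same query and a parse of its output, as a sketch (the key names "mons", "name", and "addr" are from the standard JSON mon dump format):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    for mon in json.loads(out)["mons"]:
        print(mon["name"], mon["addr"])
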
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.647 256757 DEBUG nova.storage.rbd_utils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:17:00 np0005549474 nova_compute[256753]: 2025-12-07 10:17:00.652 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:17:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 05:17:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1283011853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.086 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.089 256757 DEBUG nova.virt.libvirt.vif [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:16:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-557373625',display_name='tempest-TestNetworkBasicOps-server-557373625',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-557373625',id=13,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNXNyWHj9S6lfXbeCvFgX7QQJufvo8qCJ9LG+J3+6BnRPzwyfimivnl8uswjid/75y6t8/fISiJqp8oI0Vd5NrT9xYGY23o63Vh0qqJI7sxx0apM6VnViNbUjvOZAim9Zg==',key_name='tempest-TestNetworkBasicOps-1527897413',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-3ml0k7x5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:16:56Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=3fa72663-9aaa-4e36-92ba-35bec3874b64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.090 256757 DEBUG nova.network.os_vif_util [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.091 256757 DEBUG nova.network.os_vif_util [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:b8:27,bridge_name='br-int',has_traffic_filtering=True,id=9223ee94-eb58-4566-a91c-7a7f60d59c18,network=Network(a093ddbd-a138-4cc9-8070-9676e9871fad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9223ee94-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.093 256757 DEBUG nova.objects.instance [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3fa72663-9aaa-4e36-92ba-35bec3874b64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.122 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] End _get_guest_xml xml=<domain type="kvm">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  <uuid>3fa72663-9aaa-4e36-92ba-35bec3874b64</uuid>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  <name>instance-0000000d</name>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  <memory>131072</memory>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  <vcpu>1</vcpu>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  <metadata>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <nova:name>tempest-TestNetworkBasicOps-server-557373625</nova:name>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <nova:creationTime>2025-12-07 10:17:00</nova:creationTime>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <nova:flavor name="m1.nano">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <nova:memory>128</nova:memory>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <nova:disk>1</nova:disk>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <nova:swap>0</nova:swap>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <nova:ephemeral>0</nova:ephemeral>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <nova:vcpus>1</nova:vcpus>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      </nova:flavor>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <nova:owner>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <nova:user uuid="8f27cf20bf8c4429aa12589418a57e41">tempest-TestNetworkBasicOps-1175680372-project-member</nova:user>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <nova:project uuid="2ad61a97ffab4252be3eafb028b560c1">tempest-TestNetworkBasicOps-1175680372</nova:project>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      </nova:owner>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <nova:root type="image" uuid="af7b5730-2fa9-449f-8ccb-a9519582f1b2"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <nova:ports>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <nova:port uuid="9223ee94-eb58-4566-a91c-7a7f60d59c18">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        </nova:port>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      </nova:ports>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    </nova:instance>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  </metadata>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  <sysinfo type="smbios">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <system>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <entry name="manufacturer">RDO</entry>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <entry name="product">OpenStack Compute</entry>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <entry name="serial">3fa72663-9aaa-4e36-92ba-35bec3874b64</entry>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <entry name="uuid">3fa72663-9aaa-4e36-92ba-35bec3874b64</entry>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <entry name="family">Virtual Machine</entry>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    </system>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  </sysinfo>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  <os>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <boot dev="hd"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <smbios mode="sysinfo"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  </os>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  <features>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <acpi/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <apic/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <vmcoreinfo/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  </features>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  <clock offset="utc">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <timer name="pit" tickpolicy="delay"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <timer name="hpet" present="no"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  </clock>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  <cpu mode="host-model" match="exact">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <topology sockets="1" cores="1" threads="1"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  </cpu>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  <devices>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <disk type="network" device="disk">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/3fa72663-9aaa-4e36-92ba-35bec3874b64_disk">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <target dev="vda" bus="virtio"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <disk type="network" device="cdrom">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <driver type="raw" cache="none"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <source protocol="rbd" name="vms/3fa72663-9aaa-4e36-92ba-35bec3874b64_disk.config">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <host name="192.168.122.100" port="6789"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <host name="192.168.122.102" port="6789"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <host name="192.168.122.101" port="6789"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      </source>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <auth username="openstack">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:        <secret type="ceph" uuid="75f4c9fd-539a-5e17-b55a-0a12a4e2736c"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      </auth>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <target dev="sda" bus="sata"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    </disk>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <interface type="ethernet">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <mac address="fa:16:3e:cd:b8:27"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <driver name="vhost" rx_queue_size="512"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <mtu size="1442"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <target dev="tap9223ee94-eb"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    </interface>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <serial type="pty">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <log file="/var/lib/nova/instances/3fa72663-9aaa-4e36-92ba-35bec3874b64/console.log" append="off"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    </serial>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <video>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <model type="virtio"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    </video>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <input type="tablet" bus="usb"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <rng model="virtio">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <backend model="random">/dev/urandom</backend>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    </rng>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="pci" model="pcie-root-port"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <controller type="usb" index="0"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    <memballoon model="virtio">
Dec  7 05:17:01 np0005549474 nova_compute[256753]:      <stats period="10"/>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:    </memballoon>
Dec  7 05:17:01 np0005549474 nova_compute[256753]:  </devices>
Dec  7 05:17:01 np0005549474 nova_compute[256753]: </domain>
Dec  7 05:17:01 np0005549474 nova_compute[256753]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
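
[editor's note] With the domain XML rendered, the driver hands it to libvirt to define and boot the guest. A minimal libvirt-python sketch of that step (not Nova's exact call path; xml stands for the <domain> document logged above):

    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # boot the defined guest
    finally:
        conn.close()
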
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.124 256757 DEBUG nova.compute.manager [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Preparing to wait for external event network-vif-plugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.125 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.125 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.126 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.127 256757 DEBUG nova.virt.libvirt.vif [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-07T10:16:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-557373625',display_name='tempest-TestNetworkBasicOps-server-557373625',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-557373625',id=13,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNXNyWHj9S6lfXbeCvFgX7QQJufvo8qCJ9LG+J3+6BnRPzwyfimivnl8uswjid/75y6t8/fISiJqp8oI0Vd5NrT9xYGY23o63Vh0qqJI7sxx0apM6VnViNbUjvOZAim9Zg==',key_name='tempest-TestNetworkBasicOps-1527897413',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-3ml0k7x5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-07T10:16:56Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=3fa72663-9aaa-4e36-92ba-35bec3874b64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.128 256757 DEBUG nova.network.os_vif_util [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.129 256757 DEBUG nova.network.os_vif_util [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:b8:27,bridge_name='br-int',has_traffic_filtering=True,id=9223ee94-eb58-4566-a91c-7a7f60d59c18,network=Network(a093ddbd-a138-4cc9-8070-9676e9871fad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9223ee94-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.130 256757 DEBUG os_vif [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:b8:27,bridge_name='br-int',has_traffic_filtering=True,id=9223ee94-eb58-4566-a91c-7a7f60d59c18,network=Network(a093ddbd-a138-4cc9-8070-9676e9871fad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9223ee94-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.131 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.132 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.133 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.134 256757 DEBUG nova.network.neutron [req-8b29d7f6-da3b-4103-ba75-b6a8c70cebab req-b866f777-339e-4b22-977d-eb45db907e14 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Updated VIF entry in instance network info cache for port 9223ee94-eb58-4566-a91c-7a7f60d59c18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.135 256757 DEBUG nova.network.neutron [req-8b29d7f6-da3b-4103-ba75-b6a8c70cebab req-b866f777-339e-4b22-977d-eb45db907e14 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Updating instance_info_cache with network_info: [{"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.141 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.141 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9223ee94-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.142 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9223ee94-eb, col_values=(('external_ids', {'iface-id': '9223ee94-eb58-4566-a91c-7a7f60d59c18', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:b8:27', 'vm-uuid': '3fa72663-9aaa-4e36-92ba-35bec3874b64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
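The two transactions above are os-vif's OVS plug step: AddPortCommand attaches the tap device to br-int, and DbSetCommand stamps the Interface row with external_ids so ovn-controller can match iface-id against the Neutron port. A minimal ovsdbapp sketch of the same sequence follows; the OVSDB socket path and timeout are assumptions, not values taken from this log.

    # Sketch only: idempotent bridge/port plug via ovsdbapp, mirroring the
    # AddBridgeCommand/AddPortCommand/DbSetCommand transactions logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

    with api.transaction(check_error=True) as txn:
        # may_exist=True is why the earlier bridge txn "caused no change":
        # br-int already exists, so the command is a no-op.
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap9223ee94-eb', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap9223ee94-eb',
            ('external_ids', {
                'iface-id': '9223ee94-eb58-4566-a91c-7a7f60d59c18',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:cd:b8:27',
                'vm-uuid': '3fa72663-9aaa-4e36-92ba-35bec3874b64'})))
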
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.154 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:01 np0005549474 NetworkManager[49051]: <info>  [1765102621.1555] manager: (tap9223ee94-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.158 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.161 256757 DEBUG oslo_concurrency.lockutils [req-8b29d7f6-da3b-4103-ba75-b6a8c70cebab req-b866f777-339e-4b22-977d-eb45db907e14 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.165 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.166 256757 INFO os_vif [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:b8:27,bridge_name='br-int',has_traffic_filtering=True,id=9223ee94-eb58-4566-a91c-7a7f60d59c18,network=Network(a093ddbd-a138-4cc9-8070-9676e9871fad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9223ee94-eb')#033[00m
Dec  7 05:17:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:01.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.227 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.228 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.229 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] No VIF found with MAC fa:16:3e:cd:b8:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.229 256757 INFO nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Using config drive#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.267 256757 DEBUG nova.storage.rbd_utils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:17:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1045: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.590 256757 INFO nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Creating config drive at /var/lib/nova/instances/3fa72663-9aaa-4e36-92ba-35bec3874b64/disk.config#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.600 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3fa72663-9aaa-4e36-92ba-35bec3874b64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1r1jksr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.729 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3fa72663-9aaa-4e36-92ba-35bec3874b64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1r1jksr" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.779 256757 DEBUG nova.storage.rbd_utils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] rbd image 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.784 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3fa72663-9aaa-4e36-92ba-35bec3874b64/disk.config 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.982 256757 DEBUG oslo_concurrency.processutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3fa72663-9aaa-4e36-92ba-35bec3874b64/disk.config 3fa72663-9aaa-4e36-92ba-35bec3874b64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:17:01 np0005549474 nova_compute[256753]: 2025-12-07 10:17:01.983 256757 INFO nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Deleting local config drive /var/lib/nova/instances/3fa72663-9aaa-4e36-92ba-35bec3874b64/disk.config because it was imported into RBD.#033[00m
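The lines from 10:17:01.590 to 10:17:01.983 trace nova's config-drive path for RBD-backed storage: build the ISO locally with mkisofs, import it into the Ceph vms pool as <uuid>_disk.config, then delete the local copy. A rough replay of the two commands using the same oslo.concurrency helper the log references (a sketch, not nova's code; paths, pool, and the Ceph user are taken from the log, and error handling is omitted). Note the logged mkisofs command only looks unquoted because oslo joins the argv with spaces; -publisher takes a single multi-word argument.

    # Sketch: rebuild and import a config drive the way the log above shows.
    from oslo_concurrency import processutils

    inst = '3fa72663-9aaa-4e36-92ba-35bec3874b64'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # 1. Pack the staged metadata directory into an ISO9660 'config-2' volume.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpp1r1jksr')

    # 2. Import into the 'vms' pool; the local ISO can then be removed.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
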
Dec  7 05:17:02 np0005549474 kernel: tap9223ee94-eb: entered promiscuous mode
Dec  7 05:17:02 np0005549474 NetworkManager[49051]: <info>  [1765102622.0381] manager: (tap9223ee94-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Dec  7 05:17:02 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:02Z|00089|binding|INFO|Claiming lport 9223ee94-eb58-4566-a91c-7a7f60d59c18 for this chassis.
Dec  7 05:17:02 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:02Z|00090|binding|INFO|9223ee94-eb58-4566-a91c-7a7f60d59c18: Claiming fa:16:3e:cd:b8:27 10.100.0.12
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.039 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.046 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:02.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.054 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:b8:27 10.100.0.12'], port_security=['fa:16:3e:cd:b8:27 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3fa72663-9aaa-4e36-92ba-35bec3874b64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a093ddbd-a138-4cc9-8070-9676e9871fad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7881e6a0-531f-4397-bcfe-5415bb4b005e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1cb4825b-a70b-4cd9-948e-c2b1ee07b432, chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=9223ee94-eb58-4566-a91c-7a7f60d59c18) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.058 164143 INFO neutron.agent.ovn.metadata.agent [-] Port 9223ee94-eb58-4566-a91c-7a7f60d59c18 in datapath a093ddbd-a138-4cc9-8070-9676e9871fad bound to our chassis#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.061 164143 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a093ddbd-a138-4cc9-8070-9676e9871fad#033[00m
Dec  7 05:17:02 np0005549474 systemd-udevd[277230]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.077 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[9941628a-0b61-41ca-b015-c9c762fa4c4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.078 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa093ddbd-a1 in ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.080 262215 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa093ddbd-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.080 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7eb3b5-45ef-4dab-bd61-57a7040436f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.082 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[c937e3b4-a619-491f-948e-bc01dc434256]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 systemd-machined[217882]: New machine qemu-6-instance-0000000d.
Dec  7 05:17:02 np0005549474 NetworkManager[49051]: <info>  [1765102622.0917] device (tap9223ee94-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 05:17:02 np0005549474 NetworkManager[49051]: <info>  [1765102622.0934] device (tap9223ee94-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.096 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[0a26727c-4225-4d14-ba76-614936db6ae9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 systemd[1]: Started Virtual Machine qemu-6-instance-0000000d.
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.120 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[abb7b20e-38ca-4998-9c96-a7d5487de1cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.125 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:02 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:02Z|00091|binding|INFO|Setting lport 9223ee94-eb58-4566-a91c-7a7f60d59c18 ovn-installed in OVS
Dec  7 05:17:02 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:02Z|00092|binding|INFO|Setting lport 9223ee94-eb58-4566-a91c-7a7f60d59c18 up in Southbound
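ovn-controller's claim sequence completes here: once the Interface's external_ids:iface-id matches a Southbound Port_Binding, the chassis claims the lport, installs flows, marks the OVS interface ovn-installed, and sets the binding up, which is what later drives Neutron's network-vif-plugged notification back to nova. A hypothetical way to check the first marker with the standard ovs-vsctl CLI (not a command appearing in this log):

    # Hypothetical check: has ovn-controller finished wiring this interface?
    import subprocess

    def ovn_installed(ifname='tap9223ee94-eb'):
        out = subprocess.run(
            ['ovs-vsctl', 'get', 'Interface', ifname,
             'external_ids:ovn-installed'],
            capture_output=True, text=True, check=True)
        # ovs-vsctl prints the value quoted, e.g. "true"
        return out.stdout.strip().strip('"') == 'true'
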
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.132 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.145 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[adf52d08-6df2-426d-a4c8-c0b709b22417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.149 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[613a1cf1-4691-468e-a69e-ce04c93aac7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 NetworkManager[49051]: <info>  [1765102622.1507] manager: (tapa093ddbd-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.189 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f0b75a-b069-4903-8e94-295039cfc74e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.192 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc89aa1-6ecd-4c10-9bb4-54a3313cc928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 NetworkManager[49051]: <info>  [1765102622.2152] device (tapa093ddbd-a0): carrier: link connected
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.221 262259 DEBUG oslo.privsep.daemon [-] privsep: reply[c51674ed-db1b-4916-a4e7-bde416f651aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.239 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[e89b7e2f-a21e-4e7e-a513-987304f49e2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa093ddbd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ac:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458487, 'reachable_time': 25007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277263, 'error': None, 'target': 'ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.255 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[01f4c36d-ed7b-447f-b7b6-0ba5cbc64588]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe70:ace2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458487, 'tstamp': 458487}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277264, 'error': None, 'target': 'ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.284 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[ef28404f-286c-41f1-b93c-3bff74ee5cda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa093ddbd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ac:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458487, 'reachable_time': 25007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277266, 'error': None, 'target': 'ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.315 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[294d0e98-1295-4d05-aa3e-fda5dfccd438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.368 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[14cefc3b-02a5-4f81-bd3f-017c34e6e3fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.369 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa093ddbd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.370 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.371 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa093ddbd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.415 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:02 np0005549474 kernel: tapa093ddbd-a0: entered promiscuous mode
Dec  7 05:17:02 np0005549474 NetworkManager[49051]: <info>  [1765102622.4183] manager: (tapa093ddbd-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.422 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa093ddbd-a0, col_values=(('external_ids', {'iface-id': '5b642542-2571-46d3-9207-cb94992e16e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.423 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:02 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:02Z|00093|binding|INFO|Releasing lport 5b642542-2571-46d3-9207-cb94992e16e8 from this chassis (sb_readonly=0)
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.424 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.425 164143 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a093ddbd-a138-4cc9-8070-9676e9871fad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a093ddbd-a138-4cc9-8070-9676e9871fad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.426 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[20b2283b-8a10-484e-8855-366b77e1ba1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.427 164143 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: global
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    log         /dev/log local0 debug
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    log-tag     haproxy-metadata-proxy-a093ddbd-a138-4cc9-8070-9676e9871fad
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    user        root
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    group       root
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    maxconn     1024
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    pidfile     /var/lib/neutron/external/pids/a093ddbd-a138-4cc9-8070-9676e9871fad.pid.haproxy
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    daemon
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: defaults
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    log global
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    mode http
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    option httplog
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    option dontlognull
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    option http-server-close
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    option forwardfor
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    retries                 3
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    timeout http-request    30s
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    timeout connect         30s
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    timeout client          32s
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    timeout server          32s
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    timeout http-keep-alive 30s
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: listen listener
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    bind 169.254.169.254:80
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    server metadata /var/lib/neutron/metadata_proxy
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]:    http-request add-header X-OVN-Network-ID a093ddbd-a138-4cc9-8070-9676e9871fad
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  7 05:17:02 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:02.428 164143 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad', 'env', 'PROCESS_TAG=haproxy-a093ddbd-a138-4cc9-8070-9676e9871fad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a093ddbd-a138-4cc9-8070-9676e9871fad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
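The rendered haproxy config above is the OVN metadata proxy for this network: inside the ovnmeta-<network-id> namespace it binds the link-local metadata address 169.254.169.254:80, forwards requests to the neutron metadata agent's UNIX socket at /var/lib/neutron/metadata_proxy, and adds the X-OVN-Network-ID header so the agent can resolve the caller's instance from its source IP within that network. A hypothetical smoke test from inside the namespace (not a command from this log; requires root, and an error body is expected unless the request's source IP belongs to an instance on the network):

    # Hypothetical probe of the metadata proxy wired up above.
    import subprocess

    ns = 'ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad'
    out = subprocess.run(
        ['ip', 'netns', 'exec', ns, 'curl', '-s', '-i',
         'http://169.254.169.254/openstack/latest/meta_data.json'],
        capture_output=True, text=True, check=True)
    print(out.stdout)
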
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.437 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.694 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102622.693339, 3fa72663-9aaa-4e36-92ba-35bec3874b64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.694 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] VM Started (Lifecycle Event)#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.720 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.724 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102622.6935732, 3fa72663-9aaa-4e36-92ba-35bec3874b64 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.724 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] VM Paused (Lifecycle Event)#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.750 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.752 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.773 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
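The power-state numbers in the two sync lines above come from nova/compute/power_state.py: the DB still holds 0 (NOSTATE) because the instance has never been synced, while libvirt reports 3 (PAUSED) because the domain is created paused and only resumed once plugging completes. For decoding such lines:

    # Nova's power-state codes (nova/compute/power_state.py).
    POWER_STATES = {
        0x00: 'NOSTATE',    # never synced / unknown
        0x01: 'RUNNING',
        0x03: 'PAUSED',     # domain is created paused during spawn
        0x04: 'SHUTDOWN',
        0x06: 'CRASHED',
        0x07: 'SUSPENDED',
    }
    print(POWER_STATES[3])  # -> PAUSED
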
Dec  7 05:17:02 np0005549474 podman[277339]: 2025-12-07 10:17:02.809995851 +0000 UTC m=+0.047438627 container create 2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Dec  7 05:17:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:02 np0005549474 systemd[1]: Started libpod-conmon-2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804.scope.
Dec  7 05:17:02 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:17:02 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a087f8f8f9478edd45fd0a318efa0f05d5871ab192bf12c0fef749a5deec0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:02 np0005549474 podman[277339]: 2025-12-07 10:17:02.787144551 +0000 UTC m=+0.024587357 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec  7 05:17:02 np0005549474 podman[277339]: 2025-12-07 10:17:02.888668754 +0000 UTC m=+0.126111550 container init 2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  7 05:17:02 np0005549474 podman[277339]: 2025-12-07 10:17:02.903068225 +0000 UTC m=+0.140510991 container start 2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  7 05:17:02 np0005549474 neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad[277356]: [NOTICE]   (277360) : New worker (277362) forked
Dec  7 05:17:02 np0005549474 neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad[277356]: [NOTICE]   (277360) : Loading success.
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.945 256757 DEBUG nova.compute.manager [req-0ece2630-b893-4019-a252-ee3e439e08a1 req-592c83de-8899-402c-b06f-f611f537c528 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Received event network-vif-plugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.946 256757 DEBUG oslo_concurrency.lockutils [req-0ece2630-b893-4019-a252-ee3e439e08a1 req-592c83de-8899-402c-b06f-f611f537c528 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.946 256757 DEBUG oslo_concurrency.lockutils [req-0ece2630-b893-4019-a252-ee3e439e08a1 req-592c83de-8899-402c-b06f-f611f537c528 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.947 256757 DEBUG oslo_concurrency.lockutils [req-0ece2630-b893-4019-a252-ee3e439e08a1 req-592c83de-8899-402c-b06f-f611f537c528 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.947 256757 DEBUG nova.compute.manager [req-0ece2630-b893-4019-a252-ee3e439e08a1 req-592c83de-8899-402c-b06f-f611f537c528 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Processing event network-vif-plugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.948 256757 DEBUG nova.compute.manager [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
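This closes the loop opened at 10:17:01.126: before plugging, nova registered an expected network-vif-plugged event for the port under the per-instance "-events" lock, and wait_for_instance_event returns immediately because Neutron's notification (the external_instance_event lines at 10:17:02.945) arrived while spawn was still busy. The coordination pattern in miniature, as a hypothetical sketch with threading (nova's real implementation keys eventlet events by (name, tag), not this exact code):

    # Hypothetical sketch of nova's prepare/notify/wait event pattern.
    import threading

    pending = {}  # (event_name, port_id) -> threading.Event

    def prepare(name, tag):
        pending[(name, tag)] = threading.Event()

    def external_event(name, tag):
        # Called when the Neutron notification arrives via the API.
        pending[(name, tag)].set()

    def wait_for(name, tag, timeout=300):
        if not pending[(name, tag)].wait(timeout):
            raise TimeoutError(f'{name}-{tag} never arrived')

    port = '9223ee94-eb58-4566-a91c-7a7f60d59c18'
    prepare('network-vif-plugged', port)
    external_event('network-vif-plugged', port)
    wait_for('network-vif-plugged', port)  # returns at once, as in the log
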
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.951 256757 DEBUG nova.virt.driver [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] Emitting event <LifecycleEvent: 1765102622.9515915, 3fa72663-9aaa-4e36-92ba-35bec3874b64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.952 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] VM Resumed (Lifecycle Event)#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.953 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.956 256757 INFO nova.virt.libvirt.driver [-] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Instance spawned successfully.#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.957 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.980 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.986 256757 DEBUG nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.991 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.991 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.992 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.992 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.993 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:17:02 np0005549474 nova_compute[256753]: 2025-12-07 10:17:02.993 256757 DEBUG nova.virt.libvirt.driver [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  7 05:17:03 np0005549474 nova_compute[256753]: 2025-12-07 10:17:03.022 256757 INFO nova.compute.manager [None req-e47ec403-f497-42ec-bb23-cb4e4c3bedbe - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  7 05:17:03 np0005549474 nova_compute[256753]: 2025-12-07 10:17:03.061 256757 INFO nova.compute.manager [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Took 6.71 seconds to spawn the instance on the hypervisor.#033[00m
Dec  7 05:17:03 np0005549474 nova_compute[256753]: 2025-12-07 10:17:03.062 256757 DEBUG nova.compute.manager [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  7 05:17:03 np0005549474 nova_compute[256753]: 2025-12-07 10:17:03.130 256757 INFO nova.compute.manager [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Took 7.63 seconds to build instance.#033[00m
Dec  7 05:17:03 np0005549474 nova_compute[256753]: 2025-12-07 10:17:03.145 256757 DEBUG oslo_concurrency.lockutils [None req-4867341a-2165-4bda-97fc-bded4e9c99d2 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
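The ten lines above close out the instance build: a per-instance lock is taken around _locked_do_build_and_run_instance, held for the whole 7.7-second spawn, and the "acquired"/"released" DEBUG lines are emitted by oslo.concurrency's inner() wrapper. A minimal sketch of the same pattern using the real oslo_concurrency API; the lock name mirrors the instance UUID from the log, and the function body is a placeholder:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "3fa72663-9aaa-4e36-92ba-35bec3874b64"

    @lockutils.synchronized(INSTANCE_UUID)
    def _locked_do_build_and_run_instance():
        # spawn work happens while the lock is held; lockutils logs
        # the acquire/release lines seen above at DEBUG level
        pass

    _locked_do_build_and_run_instance()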
Dec  7 05:17:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:03.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1046: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec  7 05:17:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:04.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
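The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, each answered in about a millisecond with 200 and a zero-length body, have the shape of load-balancer health probes against the radosgw beast frontend. A sketch reproducing one probe; the target host and port are assumptions (the log records only the client side), so substitute the address beast actually binds:

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)  # port assumed
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # the log shows RGW answering 200 with no body
    conn.close()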
Dec  7 05:17:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:04 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:04 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:04 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:17:05 np0005549474 nova_compute[256753]: 2025-12-07 10:17:05.193 256757 DEBUG nova.compute.manager [req-4fb5fefb-2678-484e-8f79-2dbc3c7d85ca req-017be1f2-1bf2-480a-84b6-5651348ae6f5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Received event network-vif-plugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:17:05 np0005549474 nova_compute[256753]: 2025-12-07 10:17:05.193 256757 DEBUG oslo_concurrency.lockutils [req-4fb5fefb-2678-484e-8f79-2dbc3c7d85ca req-017be1f2-1bf2-480a-84b6-5651348ae6f5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:17:05 np0005549474 nova_compute[256753]: 2025-12-07 10:17:05.193 256757 DEBUG oslo_concurrency.lockutils [req-4fb5fefb-2678-484e-8f79-2dbc3c7d85ca req-017be1f2-1bf2-480a-84b6-5651348ae6f5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:17:05 np0005549474 nova_compute[256753]: 2025-12-07 10:17:05.194 256757 DEBUG oslo_concurrency.lockutils [req-4fb5fefb-2678-484e-8f79-2dbc3c7d85ca req-017be1f2-1bf2-480a-84b6-5651348ae6f5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:17:05 np0005549474 nova_compute[256753]: 2025-12-07 10:17:05.194 256757 DEBUG nova.compute.manager [req-4fb5fefb-2678-484e-8f79-2dbc3c7d85ca req-017be1f2-1bf2-480a-84b6-5651348ae6f5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] No waiting events found dispatching network-vif-plugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  7 05:17:05 np0005549474 nova_compute[256753]: 2025-12-07 10:17:05.194 256757 WARNING nova.compute.manager [req-4fb5fefb-2678-484e-8f79-2dbc3c7d85ca req-017be1f2-1bf2-480a-84b6-5651348ae6f5 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Received unexpected event network-vif-plugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 for instance with vm_state active and task_state None.#033[00m
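The WARNING above is the tail of Nova's external-event handshake: Neutron sends network-vif-plugged once the port is wired, Nova pops any waiter registered for that (instance, event) pair, and when nothing is waiting (the spawn already finished, so vm_state is active and task_state is None) the event is logged as unexpected and dropped, which is harmless here. A toy version of that pop-or-warn dispatch; the names are illustrative, not Nova's actual classes:

    _waiters = {}  # {(instance_uuid, event_key): callback}

    def external_instance_event(instance_uuid, event_key):
        cb = _waiters.pop((instance_uuid, event_key), None)
        if cb is None:
            print(f"WARNING: unexpected event {event_key} for {instance_uuid}")
        else:
            cb()

    external_instance_event(
        "3fa72663-9aaa-4e36-92ba-35bec3874b64",
        "network-vif-plugged-9223ee94-eb58-4566-a91c-7a7f60d59c18")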
Dec  7 05:17:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:05.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:05 np0005549474 podman[277374]: 2025-12-07 10:17:05.291610777 +0000 UTC m=+0.097425513 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  7 05:17:05 np0005549474 podman[277375]: 2025-12-07 10:17:05.332983699 +0000 UTC m=+0.136822292 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  7 05:17:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1047: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec  7 05:17:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:06.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:06 np0005549474 nova_compute[256753]: 2025-12-07 10:17:06.190 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:07.184Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:17:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:07.184Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
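Alertmanager on compute-0 cannot deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2 at port 8443: "dial tcp ... i/o timeout" means the TCP connection never completes, and "context deadline exceeded" is the same failure hitting the retry deadline. A quick reachability check for those endpoints, with hosts and port taken from the log:

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            socket.create_connection((host, 8443), timeout=3).close()
            print(host, "reachable")
        except OSError as exc:
            print(host, "unreachable:", exc)  # matches the i/o timeouts above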
Dec  7 05:17:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:07.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1048: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec  7 05:17:07 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:07Z|00094|binding|INFO|Releasing lport 5b642542-2571-46d3-9207-cb94992e16e8 from this chassis (sb_readonly=0)
Dec  7 05:17:07 np0005549474 NetworkManager[49051]: <info>  [1765102627.7115] manager: (patch-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Dec  7 05:17:07 np0005549474 nova_compute[256753]: 2025-12-07 10:17:07.711 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:07 np0005549474 NetworkManager[49051]: <info>  [1765102627.7136] manager: (patch-br-int-to-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec  7 05:17:07 np0005549474 nova_compute[256753]: 2025-12-07 10:17:07.778 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:07 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:07Z|00095|binding|INFO|Releasing lport 5b642542-2571-46d3-9207-cb94992e16e8 from this chassis (sb_readonly=0)
Dec  7 05:17:07 np0005549474 nova_compute[256753]: 2025-12-07 10:17:07.787 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:08 np0005549474 nova_compute[256753]: 2025-12-07 10:17:08.023 256757 DEBUG nova.compute.manager [req-73c3a2bb-40b9-4889-a7ec-d295673eac46 req-de3b506c-9810-4b16-9c60-d8bf29ecce8f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Received event network-changed-9223ee94-eb58-4566-a91c-7a7f60d59c18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  7 05:17:08 np0005549474 nova_compute[256753]: 2025-12-07 10:17:08.024 256757 DEBUG nova.compute.manager [req-73c3a2bb-40b9-4889-a7ec-d295673eac46 req-de3b506c-9810-4b16-9c60-d8bf29ecce8f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Refreshing instance network info cache due to event network-changed-9223ee94-eb58-4566-a91c-7a7f60d59c18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  7 05:17:08 np0005549474 nova_compute[256753]: 2025-12-07 10:17:08.024 256757 DEBUG oslo_concurrency.lockutils [req-73c3a2bb-40b9-4889-a7ec-d295673eac46 req-de3b506c-9810-4b16-9c60-d8bf29ecce8f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  7 05:17:08 np0005549474 nova_compute[256753]: 2025-12-07 10:17:08.025 256757 DEBUG oslo_concurrency.lockutils [req-73c3a2bb-40b9-4889-a7ec-d295673eac46 req-de3b506c-9810-4b16-9c60-d8bf29ecce8f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  7 05:17:08 np0005549474 nova_compute[256753]: 2025-12-07 10:17:08.026 256757 DEBUG nova.network.neutron [req-73c3a2bb-40b9-4889-a7ec-d295673eac46 req-de3b506c-9810-4b16-9c60-d8bf29ecce8f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Refreshing network info cache for port 9223ee94-eb58-4566-a91c-7a7f60d59c18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  7 05:17:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:08.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:08.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:17:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:09.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:09 np0005549474 nova_compute[256753]: 2025-12-07 10:17:09.390 256757 DEBUG nova.network.neutron [req-73c3a2bb-40b9-4889-a7ec-d295673eac46 req-de3b506c-9810-4b16-9c60-d8bf29ecce8f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Updated VIF entry in instance network info cache for port 9223ee94-eb58-4566-a91c-7a7f60d59c18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  7 05:17:09 np0005549474 nova_compute[256753]: 2025-12-07 10:17:09.391 256757 DEBUG nova.network.neutron [req-73c3a2bb-40b9-4889-a7ec-d295673eac46 req-de3b506c-9810-4b16-9c60-d8bf29ecce8f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Updating instance_info_cache with network_info: [{"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  7 05:17:09 np0005549474 nova_compute[256753]: 2025-12-07 10:17:09.414 256757 DEBUG oslo_concurrency.lockutils [req-73c3a2bb-40b9-4889-a7ec-d295673eac46 req-de3b506c-9810-4b16-9c60-d8bf29ecce8f ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
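The instance_info_cache payload logged above is plain JSON, so the addresses it carries can be pulled out directly. A sketch over a copy of that structure trimmed to the fields used here (the full blob also carries bridge, MTU, and binding details):

    import json

    network_info = json.loads("""
    [{"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.12",
        "floating_ips": [{"address": "192.168.122.223"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], fips)
    # -> 9223ee94-eb58-4566-a91c-7a7f60d59c18 10.100.0.12 ['192.168.122.223']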
Dec  7 05:17:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1049: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec  7 05:17:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:17:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:17:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:09 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:09 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:09 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
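This four-line cycle repeats every few seconds throughout the section: the Ganesha NFS server re-enters a 90-second grace period, reloads client info from the RADOS recovery backend, finds no clients to reclaim (clid count(0)), and the enforcing check returns an error, so grace never lifts. With the rados_cluster backend the shared grace state lives in a RADOS object and can be inspected with the ganesha-rados-grace helper; the pool and namespace below are assumptions to check against the cluster's NFS configuration:

    import subprocess

    # dump the shared grace-period database (pool/ns assumed)
    subprocess.run(["ganesha-rados-grace", "--pool", ".nfs",
                    "--ns", "cephfs-2", "dump"], check=False)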
Dec  7 05:17:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:10.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:10 np0005549474 podman[277450]: 2025-12-07 10:17:10.265159957 +0000 UTC m=+0.068240051 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  7 05:17:11 np0005549474 nova_compute[256753]: 2025-12-07 10:17:11.204 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:11.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1050: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  7 05:17:11 np0005549474 nova_compute[256753]: 2025-12-07 10:17:11.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:17:11 np0005549474 nova_compute[256753]: 2025-12-07 10:17:11.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  7 05:17:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:12.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:17:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:17:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:17:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:17:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:17:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:17:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:17:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:17:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:13.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1051: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec  7 05:17:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:14.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:14 np0005549474 nova_compute[256753]: 2025-12-07 10:17:14.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:17:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:14 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:14 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:14 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:17:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:15.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1052: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Dec  7 05:17:15 np0005549474 nova_compute[256753]: 2025-12-07 10:17:15.773 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:17:15 np0005549474 nova_compute[256753]: 2025-12-07 10:17:15.773 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  7 05:17:15 np0005549474 nova_compute[256753]: 2025-12-07 10:17:15.796 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  7 05:17:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:16.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:16 np0005549474 nova_compute[256753]: 2025-12-07 10:17:16.206 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:17:16 np0005549474 nova_compute[256753]: 2025-12-07 10:17:16.208 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:17:16 np0005549474 nova_compute[256753]: 2025-12-07 10:17:16.208 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  7 05:17:16 np0005549474 nova_compute[256753]: 2025-12-07 10:17:16.208 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:17:16 np0005549474 nova_compute[256753]: 2025-12-07 10:17:16.252 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:16 np0005549474 nova_compute[256753]: 2025-12-07 10:17:16.252 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
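The five lines above are one keepalive round trip on the ovsdbapp client's connection to the local ovsdb-server: after roughly 5 s of idleness it sends an inactivity probe, transitions to IDLE, and returns to ACTIVE when the reply arrives. OVSDB probes are JSON-RPC "echo" requests (RFC 7047), which a sketch can send by hand; the address is taken from the log:

    import json, socket

    s = socket.create_connection(("127.0.0.1", 6640), timeout=5)
    s.sendall(json.dumps({"method": "echo", "params": [], "id": "echo"}).encode())
    print(s.recv(4096).decode())  # the server echoes the params back
    s.close()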
Dec  7 05:17:16 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:16Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:b8:27 10.100.0.12
Dec  7 05:17:16 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:16Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:b8:27 10.100.0.12
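The DHCPOFFER/DHCPACK pair comes from ovn-controller's pinctrl thread: OVN answers DHCP for bound ports natively, with no dnsmasq involved, and the address handed to fa:16:3e:cd:b8:27 (10.100.0.12) matches the fixed IP in the cached network info above. The options served are defined in the northbound DHCP_Options table; a sketch listing them, assuming ovn-nbctl on this host can reach the NB database:

    import subprocess

    subprocess.run(["ovn-nbctl", "list", "DHCP_Options"], check=False)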
Dec  7 05:17:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:17.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:17:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:17.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1053: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Dec  7 05:17:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:18.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:18.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:17:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:18.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:17:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:18.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:17:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:19.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1054: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Dec  7 05:17:19 np0005549474 nova_compute[256753]: 2025-12-07 10:17:19.777 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:17:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:19] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  7 05:17:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:19] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  7 05:17:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:19 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:19 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:17:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:20.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:21 np0005549474 nova_compute[256753]: 2025-12-07 10:17:21.253 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:17:21 np0005549474 nova_compute[256753]: 2025-12-07 10:17:21.254 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:21 np0005549474 nova_compute[256753]: 2025-12-07 10:17:21.254 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  7 05:17:21 np0005549474 nova_compute[256753]: 2025-12-07 10:17:21.254 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:17:21 np0005549474 nova_compute[256753]: 2025-12-07 10:17:21.255 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:17:21 np0005549474 nova_compute[256753]: 2025-12-07 10:17:21.256 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1055: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  7 05:17:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:17:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:22.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:17:22 np0005549474 nova_compute[256753]: 2025-12-07 10:17:22.472 256757 INFO nova.compute.manager [None req-ac9b0d04-e035-499d-90d6-416121581ba4 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Get console output#033[00m
Dec  7 05:17:22 np0005549474 nova_compute[256753]: 2025-12-07 10:17:22.478 263860 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
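The privsep message above is a Python TypeError raised while appending a console-pty read to a bytes buffer; Nova catches it and carries on, so the "Get console output" request still succeeds. The exact wording can be reproduced in a couple of lines (the None stands in for what an empty or closed pty read can yield):

    buf = b""
    chunk = None  # what a closed/empty pty read can yield
    try:
        buf += chunk
    except TypeError as exc:
        print("Ignored error while reading from instance console pty:", exc)
    # -> ... can't concat NoneType to bytes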
Dec  7 05:17:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:23.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1056: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  7 05:17:23 np0005549474 nova_compute[256753]: 2025-12-07 10:17:23.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:17:23 np0005549474 nova_compute[256753]: 2025-12-07 10:17:23.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:17:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:24.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:24 np0005549474 nova_compute[256753]: 2025-12-07 10:17:24.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:17:24 np0005549474 nova_compute[256753]: 2025-12-07 10:17:24.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  7 05:17:24 np0005549474 nova_compute[256753]: 2025-12-07 10:17:24.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:17:24 np0005549474 nova_compute[256753]: 2025-12-07 10:17:24.782 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:17:24 np0005549474 nova_compute[256753]: 2025-12-07 10:17:24.783 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:17:24 np0005549474 nova_compute[256753]: 2025-12-07 10:17:24.783 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:17:24 np0005549474 nova_compute[256753]: 2025-12-07 10:17:24.783 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  7 05:17:24 np0005549474 nova_compute[256753]: 2025-12-07 10:17:24.784 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:17:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:24 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:17:25 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:17:25 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745725524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:17:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:25.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.246 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
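The resource audit shells out to ceph df (0.46 s here) to size the RBD-backed disk inventory. A sketch running the same command and reading the figures of interest, assuming the same --id/--conf credentials as the logged invocation; the JSON layout sketched below (top-level "stats", per-pool "max_avail") is Ceph's standard df output:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(out)
    print(df["stats"]["total_bytes"])          # raw cluster capacity
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["max_avail"])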
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.327 256757 DEBUG nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.327 256757 DEBUG nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  7 05:17:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1057: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.531 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.532 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4387MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.532 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.533 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.709 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Instance 3fa72663-9aaa-4e36-92ba-35bec3874b64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.709 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.709 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.791 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing inventories for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  7 05:17:25 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:25Z|00096|binding|INFO|Releasing lport 5b642542-2571-46d3-9207-cb94992e16e8 from this chassis (sb_readonly=0)
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.807 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:17:25 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:25Z|00097|binding|INFO|Releasing lport 5b642542-2571-46d3-9207-cb94992e16e8 from this chassis (sb_readonly=0)
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.890 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updating ProviderTree inventory for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.891 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updating inventory in ProviderTree for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.893 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.910 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing aggregate associations for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.928 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing trait associations for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
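
The trait list above is what the libvirt driver reported to Placement for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb. A minimal sketch of reading the same list back over the Placement REST API, assuming a reachable endpoint and a valid Keystone token (the PLACEMENT URL and TOKEN below are placeholders, not values from this log):

    import requests

    PLACEMENT = "http://placement.example.com/placement"  # assumed endpoint
    RP_UUID = "7e48a19e-1e29-4c67-8ffa-7daf855825bb"      # provider from the log
    TOKEN = "..."                                         # Keystone token, obtained elsewhere

    resp = requests.get(
        f"{PLACEMENT}/resource_providers/{RP_UUID}/traits",
        headers={"X-Auth-Token": TOKEN,
                 "OpenStack-API-Version": "placement 1.6"},
        timeout=10,
    )
    resp.raise_for_status()
    # The HW_CPU_X86_* subset mirrors the host CPU flags seen in the trait list above.
    print(sorted(t for t in resp.json()["traits"] if t.startswith("HW_CPU_X86_")))
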
Dec  7 05:17:25 np0005549474 nova_compute[256753]: 2025-12-07 10:17:25.965 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:17:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:26.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:26 np0005549474 nova_compute[256753]: 2025-12-07 10:17:26.301 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:17:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/151595899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:17:26 np0005549474 nova_compute[256753]: 2025-12-07 10:17:26.427 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
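
The ceph df round trip above (dispatched at 10:17:25.965, returned in 0.463s and audited by the monitor in between) is how the RBD-backed disk inventory gets its cluster totals. A sketch of the same probe with oslo.concurrency, reusing the keyring id and ceph.conf path from the command line above; the JSON keys follow the ceph df --format=json output:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print("cluster: %.0f GiB total, %.0f GiB avail" % (
        stats['total_bytes'] / 1024 ** 3,
        stats['total_avail_bytes'] / 1024 ** 3))
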
Dec  7 05:17:26 np0005549474 nova_compute[256753]: 2025-12-07 10:17:26.434 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:17:26 np0005549474 nova_compute[256753]: 2025-12-07 10:17:26.461 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
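
For scale, the schedulable capacity Placement derives from this unchanged inventory is (total - reserved) * allocation_ratio per resource class: 32 VCPU, 7168 MB of RAM, and 52.2 GB of disk. A worked check:

    # Capacity formula Placement applies to each resource class:
    #   effective = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
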
Dec  7 05:17:26 np0005549474 nova_compute[256753]: 2025-12-07 10:17:26.502 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:17:26 np0005549474 nova_compute[256753]: 2025-12-07 10:17:26.502 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
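
The acquire at 10:17:25.533 and the release here bracket the whole inventory refresh, so the "compute_resources" lock was held for 0.970s. The pattern is oslo.concurrency's named-lock decorator; a simplified sketch (the decorator and prefix match what the lockutils lines show, the function body is illustrative):

    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('compute_resources')
    def update_available_resource():
        # Inventory refresh and Placement sync run under the lock,
        # which is why the log shows it held for ~0.97 s.
        ...
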
Dec  7 05:17:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:27.187Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:17:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:27.187Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
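
Both ceph-dashboard webhook receivers failed here: compute-1 by context deadline, compute-2 by TCP i/o timeout to 192.168.122.102:8443. A quick reachability probe, as a sketch only; the URLs come from the error messages above, and an empty JSON body is used just to see whether the endpoint answers at all:

    import requests

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        url = f"http://{host}:8443/api/prometheus_receiver"
        try:
            r = requests.post(url, json={}, timeout=5)
            print(url, "->", r.status_code)
        except requests.RequestException as exc:
            print(url, "-> unreachable:", exc)
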
Dec  7 05:17:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:27.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:17:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:17:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1058: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  7 05:17:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:27 np0005549474 nova_compute[256753]: 2025-12-07 10:17:27.925 256757 INFO nova.compute.manager [None req-13824878-062e-41b1-867e-d952910a4c8b 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Get console output
Dec  7 05:17:27 np0005549474 nova_compute[256753]: 2025-12-07 10:17:27.932 263860 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
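
The ignored error is a plain TypeError: the console pty read returned None and the code concatenated it onto a bytes buffer. Illustrative only (not nova's actual code), the defensive form looks like:

    def append_console(buf: bytes, chunk) -> bytes:
        # A None read means "no new data"; coalesce it instead of raising
        # "can't concat NoneType to bytes".
        return buf + (chunk or b"")

    assert append_console(b"boot log", None) == b"boot log"
    assert append_console(b"boot log", b" more") == b"boot log more"
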
Dec  7 05:17:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:28.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:28.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:17:28 np0005549474 NetworkManager[49051]: <info>  [1765102648.8882] manager: (patch-br-int-to-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Dec  7 05:17:28 np0005549474 nova_compute[256753]: 2025-12-07 10:17:28.887 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:28 np0005549474 NetworkManager[49051]: <info>  [1765102648.8893] manager: (patch-provnet-a7a6bf42-a7fe-4d30-ae8f-c3b54df1c1d1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Dec  7 05:17:28 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:28Z|00098|binding|INFO|Releasing lport 5b642542-2571-46d3-9207-cb94992e16e8 from this chassis (sb_readonly=0)
Dec  7 05:17:28 np0005549474 nova_compute[256753]: 2025-12-07 10:17:28.901 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:28 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:28Z|00099|binding|INFO|Releasing lport 5b642542-2571-46d3-9207-cb94992e16e8 from this chassis (sb_readonly=0)
Dec  7 05:17:28 np0005549474 nova_compute[256753]: 2025-12-07 10:17:28.908 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:29.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1059: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  7 05:17:29 np0005549474 nova_compute[256753]: 2025-12-07 10:17:29.503 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:17:29 np0005549474 nova_compute[256753]: 2025-12-07 10:17:29.503 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:17:29 np0005549474 nova_compute[256753]: 2025-12-07 10:17:29.503 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:17:29 np0005549474 nova_compute[256753]: 2025-12-07 10:17:29.504 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
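
_check_instance_build_time, _heal_instance_info_cache, and the later _poll_rescued_instances and _instance_usage_audit all come from oslo.service's periodic-task machinery. A simplified sketch of how such tasks are declared (the class name and spacing value are illustrative, not nova's):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Refresh the network info cache for one instance per pass,
            # as the "Starting heal instance info cache" lines above show.
            ...
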
Dec  7 05:17:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:29] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  7 05:17:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:29] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  7 05:17:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:17:30 np0005549474 nova_compute[256753]: 2025-12-07 10:17:30.060 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  7 05:17:30 np0005549474 nova_compute[256753]: 2025-12-07 10:17:30.060 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquired lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  7 05:17:30 np0005549474 nova_compute[256753]: 2025-12-07 10:17:30.061 256757 DEBUG nova.network.neutron [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  7 05:17:30 np0005549474 nova_compute[256753]: 2025-12-07 10:17:30.061 256757 DEBUG nova.objects.instance [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3fa72663-9aaa-4e36-92ba-35bec3874b64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  7 05:17:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:30.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:30 np0005549474 nova_compute[256753]: 2025-12-07 10:17:30.211 256757 INFO nova.compute.manager [None req-4f18b888-3b6d-4a06-8e8e-13b2d723271e 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Get console output
Dec  7 05:17:30 np0005549474 nova_compute[256753]: 2025-12-07 10:17:30.218 263860 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec  7 05:17:30 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:30.396 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  7 05:17:30 np0005549474 nova_compute[256753]: 2025-12-07 10:17:30.398 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:30 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:30.398 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.015 256757 DEBUG nova.compute.manager [req-21502181-c4ec-410b-a1ca-796d78b75223 req-d5f33680-0e26-43e4-b966-0acc799fab05 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Received event network-changed-9223ee94-eb58-4566-a91c-7a7f60d59c18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.015 256757 DEBUG nova.compute.manager [req-21502181-c4ec-410b-a1ca-796d78b75223 req-d5f33680-0e26-43e4-b966-0acc799fab05 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Refreshing instance network info cache due to event network-changed-9223ee94-eb58-4566-a91c-7a7f60d59c18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.016 256757 DEBUG oslo_concurrency.lockutils [req-21502181-c4ec-410b-a1ca-796d78b75223 req-d5f33680-0e26-43e4-b966-0acc799fab05 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.092 256757 DEBUG oslo_concurrency.lockutils [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "3fa72663-9aaa-4e36-92ba-35bec3874b64" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.093 256757 DEBUG oslo_concurrency.lockutils [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.094 256757 DEBUG oslo_concurrency.lockutils [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.094 256757 DEBUG oslo_concurrency.lockutils [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.095 256757 DEBUG oslo_concurrency.lockutils [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.097 256757 INFO nova.compute.manager [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Terminating instance
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.099 256757 DEBUG nova.compute.manager [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  7 05:17:31 np0005549474 kernel: tap9223ee94-eb (unregistering): left promiscuous mode
Dec  7 05:17:31 np0005549474 NetworkManager[49051]: <info>  [1765102651.1523] device (tap9223ee94-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  7 05:17:31 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:31Z|00100|binding|INFO|Releasing lport 9223ee94-eb58-4566-a91c-7a7f60d59c18 from this chassis (sb_readonly=0)
Dec  7 05:17:31 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:31Z|00101|binding|INFO|Setting lport 9223ee94-eb58-4566-a91c-7a7f60d59c18 down in Southbound
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.212 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:31 np0005549474 ovn_controller[154296]: 2025-12-07T10:17:31Z|00102|binding|INFO|Removing iface tap9223ee94-eb ovn-installed in OVS
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.214 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.221 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:b8:27 10.100.0.12'], port_security=['fa:16:3e:cd:b8:27 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3fa72663-9aaa-4e36-92ba-35bec3874b64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a093ddbd-a138-4cc9-8070-9676e9871fad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ad61a97ffab4252be3eafb028b560c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7881e6a0-531f-4397-bcfe-5415bb4b005e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1cb4825b-a70b-4cd9-948e-c2b1ee07b432, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>], logical_port=9223ee94-eb58-4566-a91c-7a7f60d59c18) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd8beb5c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.223 164143 INFO neutron.agent.ovn.metadata.agent [-] Port 9223ee94-eb58-4566-a91c-7a7f60d59c18 in datapath a093ddbd-a138-4cc9-8070-9676e9871fad unbound from our chassis
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.225 164143 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a093ddbd-a138-4cc9-8070-9676e9871fad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.226 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[74ccd1e8-08fe-4a60-a52c-acc93dbdc52b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.226 164143 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad namespace which is not needed anymore
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.233 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:31.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:31 np0005549474 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec  7 05:17:31 np0005549474 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Consumed 14.702s CPU time.
Dec  7 05:17:31 np0005549474 systemd-machined[217882]: Machine qemu-6-instance-0000000d terminated.
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.303 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.306 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.335 256757 INFO nova.virt.libvirt.driver [-] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Instance destroyed successfully.
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.336 256757 DEBUG nova.objects.instance [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lazy-loading 'resources' on Instance uuid 3fa72663-9aaa-4e36-92ba-35bec3874b64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.355 256757 DEBUG nova.virt.libvirt.vif [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-07T10:16:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-557373625',display_name='tempest-TestNetworkBasicOps-server-557373625',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-557373625',id=13,image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNXNyWHj9S6lfXbeCvFgX7QQJufvo8qCJ9LG+J3+6BnRPzwyfimivnl8uswjid/75y6t8/fISiJqp8oI0Vd5NrT9xYGY23o63Vh0qqJI7sxx0apM6VnViNbUjvOZAim9Zg==',key_name='tempest-TestNetworkBasicOps-1527897413',keypairs=<?>,launch_index=0,launched_at=2025-12-07T10:17:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ad61a97ffab4252be3eafb028b560c1',ramdisk_id='',reservation_id='r-3ml0k7x5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='af7b5730-2fa9-449f-8ccb-a9519582f1b2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1175680372',owner_user_name='tempest-TestNetworkBasicOps-1175680372-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-07T10:17:03Z,user_data=None,user_id='8f27cf20bf8c4429aa12589418a57e41',uuid=3fa72663-9aaa-4e36-92ba-35bec3874b64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.356 256757 DEBUG nova.network.os_vif_util [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converting VIF {"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.357 256757 DEBUG nova.network.os_vif_util [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:b8:27,bridge_name='br-int',has_traffic_filtering=True,id=9223ee94-eb58-4566-a91c-7a7f60d59c18,network=Network(a093ddbd-a138-4cc9-8070-9676e9871fad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9223ee94-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.357 256757 DEBUG os_vif [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:b8:27,bridge_name='br-int',has_traffic_filtering=True,id=9223ee94-eb58-4566-a91c-7a7f60d59c18,network=Network(a093ddbd-a138-4cc9-8070-9676e9871fad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9223ee94-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.360 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.361 256757 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9223ee94-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.363 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.365 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.367 256757 INFO os_vif [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:b8:27,bridge_name='br-int',has_traffic_filtering=True,id=9223ee94-eb58-4566-a91c-7a7f60d59c18,network=Network(a093ddbd-a138-4cc9-8070-9676e9871fad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9223ee94-eb')
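
The unplug reduced to one OVSDB transaction: DelPortCommand(port=tap9223ee94-eb, bridge=br-int, if_exists=True). The same operation expressed directly with ovsdbapp, as a sketch; the OVSDB endpoint below is assumed (os-vif normally talks to the local switch's own socket):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'tcp:127.0.0.1:6640'  # assumed endpoint
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Mirrors the DelPortCommand in the transaction logged above.
    api.del_port('tap9223ee94-eb', bridge='br-int', if_exists=True).execute(
        check_error=True)
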
Dec  7 05:17:31 np0005549474 neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad[277356]: [NOTICE]   (277360) : haproxy version is 2.8.14-c23fe91
Dec  7 05:17:31 np0005549474 neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad[277356]: [NOTICE]   (277360) : path to executable is /usr/sbin/haproxy
Dec  7 05:17:31 np0005549474 neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad[277356]: [WARNING]  (277360) : Exiting Master process...
Dec  7 05:17:31 np0005549474 neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad[277356]: [ALERT]    (277360) : Current worker (277362) exited with code 143 (Terminated)
Dec  7 05:17:31 np0005549474 neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad[277356]: [WARNING]  (277360) : All workers exited. Exiting... (0)
Dec  7 05:17:31 np0005549474 systemd[1]: libpod-2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804.scope: Deactivated successfully.
Dec  7 05:17:31 np0005549474 podman[277589]: 2025-12-07 10:17:31.388279815 +0000 UTC m=+0.053309877 container died 2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  7 05:17:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804-userdata-shm.mount: Deactivated successfully.
Dec  7 05:17:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay-48a087f8f8f9478edd45fd0a318efa0f05d5871ab192bf12c0fef749a5deec0b-merged.mount: Deactivated successfully.
Dec  7 05:17:31 np0005549474 podman[277589]: 2025-12-07 10:17:31.428977658 +0000 UTC m=+0.094007740 container cleanup 2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:17:31 np0005549474 systemd[1]: libpod-conmon-2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804.scope: Deactivated successfully.
Dec  7 05:17:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1060: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Dec  7 05:17:31 np0005549474 podman[277646]: 2025-12-07 10:17:31.504443664 +0000 UTC m=+0.054547279 container remove 2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.513 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[baef82d5-ea05-4131-a740-7fec6cb02ee9]: (4, ('Sun Dec  7 10:17:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad (2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804)\n2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804\nSun Dec  7 10:17:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad (2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804)\n2b45bc5c857eaf90b7210e63b9a1110bb2ecc5174e6baf1bf0c264efd2744804\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.515 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[375a028c-5045-4b6b-aeb3-b479a7ae2e74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.516 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa093ddbd-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.519 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
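
The privsep reply above shows the metadata agent stopping and then deleting the per-network haproxy container by name. The equivalent sequence driven directly, as a sketch (the container name is the one from the log):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad'
    subprocess.run(['podman', 'stop', name], check=True)  # "Stopping container ..." above
    subprocess.run(['podman', 'rm', name], check=True)    # "Deleting container ..." above
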
Dec  7 05:17:31 np0005549474 kernel: tapa093ddbd-a0: left promiscuous mode
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.521 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.523 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[e3268b33-7dc9-4107-979c-09a11e1ac968]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.533 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.542 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[e80a1d5e-c647-49a3-8bc5-44d39853ec04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.544 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[0e65d7f8-2bfb-49cd-892a-9b760730f071]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.565 262215 DEBUG oslo.privsep.daemon [-] privsep: reply[5333f167-d348-4882-9239-0f2c03aac0c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458480, 'reachable_time': 27954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277662, 'error': None, 'target': 'ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  7 05:17:31 np0005549474 systemd[1]: run-netns-ovnmeta\x2da093ddbd\x2da138\x2d4cc9\x2d8070\x2d9676e9871fad.mount: Deactivated successfully.
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.570 164283 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  7 05:17:31 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:31.570 164283 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0b7664-11dc-4909-a8b8-06123c3226d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
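
With the last VIF gone, the agent removes the per-network ovnmeta- namespace. Neutron's privileged ip_lib does this through pyroute2; a minimal sketch of the same teardown, which must run as root:

    from pyroute2 import netns

    ns = 'ovnmeta-a093ddbd-a138-4cc9-8070-9676e9871fad'  # from the log
    if ns in netns.listnetns():
        netns.remove(ns)
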
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.809 256757 INFO nova.virt.libvirt.driver [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Deleting instance files /var/lib/nova/instances/3fa72663-9aaa-4e36-92ba-35bec3874b64_del
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.811 256757 INFO nova.virt.libvirt.driver [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Deletion of /var/lib/nova/instances/3fa72663-9aaa-4e36-92ba-35bec3874b64_del complete
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.891 256757 INFO nova.compute.manager [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Took 0.79 seconds to destroy the instance on the hypervisor.
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.892 256757 DEBUG oslo.service.loopingcall [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.893 256757 DEBUG nova.compute.manager [-] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  7 05:17:31 np0005549474 nova_compute[256753]: 2025-12-07 10:17:31.893 256757 DEBUG nova.network.neutron [-] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  7 05:17:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:32.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:32 np0005549474 nova_compute[256753]: 2025-12-07 10:17:32.135 256757 DEBUG nova.compute.manager [req-1177fddc-049f-44d5-bceb-92de0d6c12e6 req-697280f3-b14a-4801-b168-f85e4411eab0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Received event network-vif-unplugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:17:32 np0005549474 nova_compute[256753]: 2025-12-07 10:17:32.136 256757 DEBUG oslo_concurrency.lockutils [req-1177fddc-049f-44d5-bceb-92de0d6c12e6 req-697280f3-b14a-4801-b168-f85e4411eab0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:17:32 np0005549474 nova_compute[256753]: 2025-12-07 10:17:32.137 256757 DEBUG oslo_concurrency.lockutils [req-1177fddc-049f-44d5-bceb-92de0d6c12e6 req-697280f3-b14a-4801-b168-f85e4411eab0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:17:32 np0005549474 nova_compute[256753]: 2025-12-07 10:17:32.137 256757 DEBUG oslo_concurrency.lockutils [req-1177fddc-049f-44d5-bceb-92de0d6c12e6 req-697280f3-b14a-4801-b168-f85e4411eab0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:17:32 np0005549474 nova_compute[256753]: 2025-12-07 10:17:32.138 256757 DEBUG nova.compute.manager [req-1177fddc-049f-44d5-bceb-92de0d6c12e6 req-697280f3-b14a-4801-b168-f85e4411eab0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] No waiting events found dispatching network-vif-unplugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:17:32 np0005549474 nova_compute[256753]: 2025-12-07 10:17:32.138 256757 DEBUG nova.compute.manager [req-1177fddc-049f-44d5-bceb-92de0d6c12e6 req-697280f3-b14a-4801-b168-f85e4411eab0 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Received event network-vif-unplugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  7 05:17:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.061 256757 DEBUG nova.network.neutron [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Updating instance_info_cache with network_info: [{"id": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "address": "fa:16:3e:cd:b8:27", "network": {"id": "a093ddbd-a138-4cc9-8070-9676e9871fad", "bridge": "br-int", "label": "tempest-network-smoke--871517660", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ad61a97ffab4252be3eafb028b560c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9223ee94-eb", "ovs_interfaceid": "9223ee94-eb58-4566-a91c-7a7f60d59c18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.088 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Releasing lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.089 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.089 256757 DEBUG oslo_concurrency.lockutils [req-21502181-c4ec-410b-a1ca-796d78b75223 req-d5f33680-0e26-43e4-b966-0acc799fab05 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquired lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.090 256757 DEBUG nova.network.neutron [req-21502181-c4ec-410b-a1ca-796d78b75223 req-d5f33680-0e26-43e4-b966-0acc799fab05 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Refreshing network info cache for port 9223ee94-eb58-4566-a91c-7a7f60d59c18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.092 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.092 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:17:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 05:17:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:33.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
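Each radosgw beast access line above carries, in order, the connection pointer, the client IP, the user (anonymous here), the request timestamp, the request line, the HTTP status, the byte count, and a trailing latency field. A hedged parsing sketch, assuming the lines keep exactly this shape (the regex is an illustration, not an official radosgw log grammar):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:10:17:33.249 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.002000054s')
    m = BEAST.search(line)
    print(m.group('ip'), m.group('status'), m.group('latency'))
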
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.268 256757 INFO nova.network.neutron [req-21502181-c4ec-410b-a1ca-796d78b75223 req-d5f33680-0e26-43e4-b966-0acc799fab05 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Port 9223ee94-eb58-4566-a91c-7a7f60d59c18 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.268 256757 DEBUG nova.network.neutron [req-21502181-c4ec-410b-a1ca-796d78b75223 req-d5f33680-0e26-43e4-b966-0acc799fab05 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.303 256757 DEBUG oslo_concurrency.lockutils [req-21502181-c4ec-410b-a1ca-796d78b75223 req-d5f33680-0e26-43e4-b966-0acc799fab05 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Releasing lock "refresh_cache-3fa72663-9aaa-4e36-92ba-35bec3874b64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.306 256757 DEBUG nova.network.neutron [-] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.323 256757 INFO nova.compute.manager [-] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Took 1.43 seconds to deallocate network for instance.
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.374 256757 DEBUG nova.compute.manager [req-e99d1676-ff01-4c1c-8999-e361e70fa20a req-5094537f-e5d4-45ce-bb5a-997e8cd8cd74 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Received event network-vif-deleted-9223ee94-eb58-4566-a91c-7a7f60d59c18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.376 256757 DEBUG oslo_concurrency.lockutils [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.376 256757 DEBUG oslo_concurrency.lockutils [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.424 256757 DEBUG oslo_concurrency.processutils [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:17:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 29 op/s
Dec  7 05:17:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:17:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/861001628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.975 256757 DEBUG oslo_concurrency.processutils [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
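The two processutils lines above show the resource tracker shelling out to ceph df to size its RBD-backed disk inventory; the call took 0.551s and exited 0. A sketch of the same call from Python (the JSON keys "stats", "total_bytes", and "total_avail_bytes" are assumptions based on current ceph df output, not taken from this log):

    import json
    import subprocess

    # Same command oslo.concurrency logged above.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']          # assumed key names
    print('total GiB:', stats['total_bytes'] / 2**30,
          'avail GiB:', stats['total_avail_bytes'] / 2**30)
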
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.983 256757 DEBUG nova.compute.provider_tree [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:17:33 np0005549474 nova_compute[256753]: 2025-12-07 10:17:33.997 256757 DEBUG nova.scheduler.client.report [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
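The inventory dict above is what placement hands the scheduler; schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises:

    # Capacity derived from the inventory dict logged above.
    vcpu = (8 - 0) * 4.0        # 32.0 schedulable vCPUs
    ram = (7680 - 512) * 1.0    # 7168.0 MiB of RAM
    disk = (59 - 1) * 0.9       # 52.2 GB of disk
    print(vcpu, ram, disk)
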
Dec  7 05:17:34 np0005549474 nova_compute[256753]: 2025-12-07 10:17:34.020 256757 DEBUG oslo_concurrency.lockutils [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:17:34 np0005549474 nova_compute[256753]: 2025-12-07 10:17:34.052 256757 INFO nova.scheduler.client.report [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Deleted allocations for instance 3fa72663-9aaa-4e36-92ba-35bec3874b64
Dec  7 05:17:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:34.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:34 np0005549474 nova_compute[256753]: 2025-12-07 10:17:34.106 256757 DEBUG oslo_concurrency.lockutils [None req-2a6b1510-6fad-4069-87c6-db02435974e5 8f27cf20bf8c4429aa12589418a57e41 2ad61a97ffab4252be3eafb028b560c1 - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:17:34 np0005549474 nova_compute[256753]: 2025-12-07 10:17:34.214 256757 DEBUG nova.compute.manager [req-3830e099-abc0-4bf3-a840-3fe1982011f6 req-655f1744-91db-4793-8311-426fbc9cc9f9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Received event network-vif-plugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  7 05:17:34 np0005549474 nova_compute[256753]: 2025-12-07 10:17:34.214 256757 DEBUG oslo_concurrency.lockutils [req-3830e099-abc0-4bf3-a840-3fe1982011f6 req-655f1744-91db-4793-8311-426fbc9cc9f9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Acquiring lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:17:34 np0005549474 nova_compute[256753]: 2025-12-07 10:17:34.215 256757 DEBUG oslo_concurrency.lockutils [req-3830e099-abc0-4bf3-a840-3fe1982011f6 req-655f1744-91db-4793-8311-426fbc9cc9f9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:17:34 np0005549474 nova_compute[256753]: 2025-12-07 10:17:34.215 256757 DEBUG oslo_concurrency.lockutils [req-3830e099-abc0-4bf3-a840-3fe1982011f6 req-655f1744-91db-4793-8311-426fbc9cc9f9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] Lock "3fa72663-9aaa-4e36-92ba-35bec3874b64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:17:34 np0005549474 nova_compute[256753]: 2025-12-07 10:17:34.215 256757 DEBUG nova.compute.manager [req-3830e099-abc0-4bf3-a840-3fe1982011f6 req-655f1744-91db-4793-8311-426fbc9cc9f9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] No waiting events found dispatching network-vif-plugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  7 05:17:34 np0005549474 nova_compute[256753]: 2025-12-07 10:17:34.215 256757 WARNING nova.compute.manager [req-3830e099-abc0-4bf3-a840-3fe1982011f6 req-655f1744-91db-4793-8311-426fbc9cc9f9 ce04b07a6481419ca693369324f8f81a e34ec7d7242a45c2a381d8d2c72bf7bc - - default default] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Received unexpected event network-vif-plugged-9223ee94-eb58-4566-a91c-7a7f60d59c18 for instance with vm_state deleted and task_state None.
Dec  7 05:17:34 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:34.401 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  7 05:17:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:17:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:17:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:35.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:17:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec  7 05:17:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:36.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:36 np0005549474 podman[277692]: 2025-12-07 10:17:36.285926198 +0000 UTC m=+0.091332559 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  7 05:17:36 np0005549474 nova_compute[256753]: 2025-12-07 10:17:36.338 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:36 np0005549474 nova_compute[256753]: 2025-12-07 10:17:36.364 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:36 np0005549474 podman[277693]: 2025-12-07 10:17:36.36381096 +0000 UTC m=+0.164438411 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
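The config_data label embedded in these podman health_status lines is a Python-literal dict, so it can be recovered with ast.literal_eval once cut out of the log line. The snippet below uses a hand-trimmed subset of the ovn_controller entry above; slicing the full label out of a raw line is left as an assumption:

    import ast

    # Trimmed subset of the config_data label logged for ovn_controller.
    config_data = ast.literal_eval(
        "{'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller',"
        " 'test': '/openstack/healthcheck'}, 'net': 'host', 'restart': 'always'}")
    print(config_data['healthcheck']['test'])   # -> /openstack/healthcheck
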
Dec  7 05:17:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:37.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:17:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:37.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec  7 05:17:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:37 np0005549474 podman[277864]: 2025-12-07 10:17:37.880345024 +0000 UTC m=+0.095774298 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 05:17:37 np0005549474 podman[277864]: 2025-12-07 10:17:37.990318186 +0000 UTC m=+0.205747380 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec  7 05:17:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:17:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:38.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:17:38 np0005549474 nova_compute[256753]: 2025-12-07 10:17:38.332 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:38 np0005549474 nova_compute[256753]: 2025-12-07 10:17:38.445 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:38.629 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:17:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:38.630 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:17:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:17:38.630 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:17:38 np0005549474 podman[277987]: 2025-12-07 10:17:38.641244318 +0000 UTC m=+0.081124401 container exec 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:17:38 np0005549474 podman[278012]: 2025-12-07 10:17:38.749491243 +0000 UTC m=+0.081735868 container exec_died 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:17:38 np0005549474 podman[277987]: 2025-12-07 10:17:38.757017537 +0000 UTC m=+0.196897620 container exec_died 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:17:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:38.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:17:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:38.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:17:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:38.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
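Alertmanager gives up on each webhook receiver after two attempts, logging one warn line per failed attempt and a final dispatcher error. A rough Python equivalent of that bounded-retry POST (illustrative only; the real dispatcher lives inside Alertmanager's Go code, and the attempt count is the only number taken from the log):

    import urllib.request

    def notify(url, payload, attempts=2, timeout=10):
        """Bounded-retry webhook POST, mirroring 'retry canceled after 2 attempts'."""
        last_exc = None
        for _ in range(attempts):
            try:
                req = urllib.request.Request(
                    url, data=payload,              # payload: JSON bytes
                    headers={'Content-Type': 'application/json'})
                return urllib.request.urlopen(req, timeout=timeout)
            except OSError as exc:                  # dial timeout, refused, etc.
                last_exc = exc
        raise RuntimeError('notify retry canceled after %d attempts: %s'
                           % (attempts, last_exc))
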
Dec  7 05:17:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:39.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:39 np0005549474 podman[278081]: 2025-12-07 10:17:39.262584577 +0000 UTC m=+0.062489606 container exec a4fe25d85321292dd92d6ed4b00781032c412e604e47993a7bacd8717161d6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  7 05:17:39 np0005549474 podman[278081]: 2025-12-07 10:17:39.274534451 +0000 UTC m=+0.074439450 container exec_died a4fe25d85321292dd92d6ed4b00781032c412e604e47993a7bacd8717161d6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Dec  7 05:17:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Dec  7 05:17:39 np0005549474 podman[278148]: 2025-12-07 10:17:39.557667739 +0000 UTC m=+0.067519662 container exec e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 05:17:39 np0005549474 podman[278148]: 2025-12-07 10:17:39.573611211 +0000 UTC m=+0.083463134 container exec_died e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 05:17:39 np0005549474 podman[278214]: 2025-12-07 10:17:39.859053251 +0000 UTC m=+0.074112580 container exec 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, build-date=2023-02-22T09:23:20, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  7 05:17:39 np0005549474 podman[278214]: 2025-12-07 10:17:39.882720653 +0000 UTC m=+0.097779912 container exec_died 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-type=git, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, name=keepalived, version=2.2.4, io.openshift.expose-services=)
Dec  7 05:17:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:39] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  7 05:17:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:39] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec  7 05:17:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:39 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:39 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:39 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:17:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:40.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:40 np0005549474 podman[278282]: 2025-12-07 10:17:40.20895796 +0000 UTC m=+0.086184608 container exec d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:17:40 np0005549474 podman[278282]: 2025-12-07 10:17:40.252186682 +0000 UTC m=+0.129413270 container exec_died d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:17:40 np0005549474 podman[278325]: 2025-12-07 10:17:40.409452487 +0000 UTC m=+0.076034053 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  7 05:17:40 np0005549474 podman[278374]: 2025-12-07 10:17:40.565387676 +0000 UTC m=+0.072360293 container exec d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 05:17:40 np0005549474 podman[278374]: 2025-12-07 10:17:40.760444555 +0000 UTC m=+0.267417182 container exec_died d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 05:17:41 np0005549474 podman[278487]: 2025-12-07 10:17:41.241830249 +0000 UTC m=+0.068891080 container exec 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:17:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:41.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:41 np0005549474 podman[278487]: 2025-12-07 10:17:41.301698982 +0000 UTC m=+0.128759843 container exec_died 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:17:41 np0005549474 nova_compute[256753]: 2025-12-07 10:17:41.371 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:17:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:17:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec  7 05:17:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:17:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:42.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.6 KiB/s wr, 32 op/s
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
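Every handle_command/audit pair above is a JSON mon command dispatched to mon.compute-0 by the mgr. The same wire format can be exercised from Python through the rados binding; a sketch, assuming python3-rados is installed and that client.openstack (seen earlier running ceph df) is allowed to query the monitor:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    # Same command shape as the handle_command line above.
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'osd blocklist ls', 'format': 'json'}), b'')
    print(ret, json.loads(out or b'[]'))
    cluster.shutdown()
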
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:17:42
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['.nfs', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'vms']
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:17:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:42 np0005549474 podman[278706]: 2025-12-07 10:17:42.9197555 +0000 UTC m=+0.057197361 container create c159620f924edb492c31a0524eb98a059ec1784e75aa00834ab198f7fdb8c7fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_babbage, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
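Each pg_autoscaler pair above fits pg_target = capacity_ratio * bias * 300: for example 0.000665858301588852 * 1.0 * 300 = 0.19975749047665559 for 'images', and 5.087256625643029e-07 * 4.0 * 300 = 0.0006104707950771635 for 'cephfs.cephfs.meta'. The factor of 300 would be consistent with the default mon_target_pg_per_osd of 100 across 3 OSDs, but that OSD count is inferred from the arithmetic, not stated in the log:

    # Reproducing the logged pg targets; 300 is the inferred
    # mon_target_pg_per_osd (100) * OSD count (3).
    for pool, ratio, bias in [('.mgr', 7.185749983720779e-06, 1.0),
                              ('images', 0.000665858301588852, 1.0),
                              ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0)]:
        print(pool, ratio * bias * 300)
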
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:17:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:17:42 np0005549474 systemd[1]: Started libpod-conmon-c159620f924edb492c31a0524eb98a059ec1784e75aa00834ab198f7fdb8c7fd.scope.
Dec  7 05:17:42 np0005549474 podman[278706]: 2025-12-07 10:17:42.899743708 +0000 UTC m=+0.037185589 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:17:43 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:17:43 np0005549474 podman[278706]: 2025-12-07 10:17:43.022951579 +0000 UTC m=+0.160393510 container init c159620f924edb492c31a0524eb98a059ec1784e75aa00834ab198f7fdb8c7fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_babbage, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:17:43 np0005549474 podman[278706]: 2025-12-07 10:17:43.03108741 +0000 UTC m=+0.168529271 container start c159620f924edb492c31a0524eb98a059ec1784e75aa00834ab198f7fdb8c7fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 05:17:43 np0005549474 podman[278706]: 2025-12-07 10:17:43.034702538 +0000 UTC m=+0.172144489 container attach c159620f924edb492c31a0524eb98a059ec1784e75aa00834ab198f7fdb8c7fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_babbage, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  7 05:17:43 np0005549474 dazzling_babbage[278723]: 167 167
Dec  7 05:17:43 np0005549474 systemd[1]: libpod-c159620f924edb492c31a0524eb98a059ec1784e75aa00834ab198f7fdb8c7fd.scope: Deactivated successfully.
Dec  7 05:17:43 np0005549474 conmon[278723]: conmon c159620f924edb492c31 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c159620f924edb492c31a0524eb98a059ec1784e75aa00834ab198f7fdb8c7fd.scope/container/memory.events
Dec  7 05:17:43 np0005549474 podman[278706]: 2025-12-07 10:17:43.041479111 +0000 UTC m=+0.178921002 container died c159620f924edb492c31a0524eb98a059ec1784e75aa00834ab198f7fdb8c7fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_babbage, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:17:43 np0005549474 systemd[1]: var-lib-containers-storage-overlay-38a081623717987d7b55f193db9821ff468ec1685e45710f8d4d6ebe8863bc40-merged.mount: Deactivated successfully.
Dec  7 05:17:43 np0005549474 podman[278706]: 2025-12-07 10:17:43.092126765 +0000 UTC m=+0.229568656 container remove c159620f924edb492c31a0524eb98a059ec1784e75aa00834ab198f7fdb8c7fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_babbage, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Dec  7 05:17:43 np0005549474 systemd[1]: libpod-conmon-c159620f924edb492c31a0524eb98a059ec1784e75aa00834ab198f7fdb8c7fd.scope: Deactivated successfully.
Dec  7 05:17:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:43.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:43 np0005549474 podman[278747]: 2025-12-07 10:17:43.34309354 +0000 UTC m=+0.071998823 container create 72a74146b8d63947a5f9abe9aec1a7508f983a5adaedd8da73210246a707dbb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:17:43 np0005549474 podman[278747]: 2025-12-07 10:17:43.311085023 +0000 UTC m=+0.039990366 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:17:43 np0005549474 systemd[1]: Started libpod-conmon-72a74146b8d63947a5f9abe9aec1a7508f983a5adaedd8da73210246a707dbb8.scope.
Dec  7 05:17:43 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:17:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7b3a4e481960b54a09a55494548c72f9cf9f991cd074c621b0e21462218736/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7b3a4e481960b54a09a55494548c72f9cf9f991cd074c621b0e21462218736/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7b3a4e481960b54a09a55494548c72f9cf9f991cd074c621b0e21462218736/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7b3a4e481960b54a09a55494548c72f9cf9f991cd074c621b0e21462218736/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:43 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7b3a4e481960b54a09a55494548c72f9cf9f991cd074c621b0e21462218736/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:43 np0005549474 podman[278747]: 2025-12-07 10:17:43.484139745 +0000 UTC m=+0.213045068 container init 72a74146b8d63947a5f9abe9aec1a7508f983a5adaedd8da73210246a707dbb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:17:43 np0005549474 podman[278747]: 2025-12-07 10:17:43.497290562 +0000 UTC m=+0.226195845 container start 72a74146b8d63947a5f9abe9aec1a7508f983a5adaedd8da73210246a707dbb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 05:17:43 np0005549474 podman[278747]: 2025-12-07 10:17:43.501194648 +0000 UTC m=+0.230099981 container attach 72a74146b8d63947a5f9abe9aec1a7508f983a5adaedd8da73210246a707dbb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:17:43 np0005549474 stoic_cray[278765]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:17:43 np0005549474 stoic_cray[278765]: --> All data devices are unavailable
Dec  7 05:17:43 np0005549474 systemd[1]: libpod-72a74146b8d63947a5f9abe9aec1a7508f983a5adaedd8da73210246a707dbb8.scope: Deactivated successfully.
Dec  7 05:17:43 np0005549474 podman[278747]: 2025-12-07 10:17:43.90064753 +0000 UTC m=+0.629552813 container died 72a74146b8d63947a5f9abe9aec1a7508f983a5adaedd8da73210246a707dbb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:17:43 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5e7b3a4e481960b54a09a55494548c72f9cf9f991cd074c621b0e21462218736-merged.mount: Deactivated successfully.
Dec  7 05:17:43 np0005549474 podman[278747]: 2025-12-07 10:17:43.963393022 +0000 UTC m=+0.692298275 container remove 72a74146b8d63947a5f9abe9aec1a7508f983a5adaedd8da73210246a707dbb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 05:17:43 np0005549474 systemd[1]: libpod-conmon-72a74146b8d63947a5f9abe9aec1a7508f983a5adaedd8da73210246a707dbb8.scope: Deactivated successfully.
Dec  7 05:17:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:17:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:44.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:17:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec  7 05:17:44 np0005549474 podman[278882]: 2025-12-07 10:17:44.649457636 +0000 UTC m=+0.066166855 container create 7596cf1847c741b1dafa2efe6d7e95ddb84cbb000fed8d8ebfeaa9859361acb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_morse, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:17:44 np0005549474 systemd[1]: Started libpod-conmon-7596cf1847c741b1dafa2efe6d7e95ddb84cbb000fed8d8ebfeaa9859361acb2.scope.
Dec  7 05:17:44 np0005549474 podman[278882]: 2025-12-07 10:17:44.623537243 +0000 UTC m=+0.040246502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:17:44 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:17:44 np0005549474 podman[278882]: 2025-12-07 10:17:44.762742268 +0000 UTC m=+0.179451537 container init 7596cf1847c741b1dafa2efe6d7e95ddb84cbb000fed8d8ebfeaa9859361acb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_morse, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 05:17:44 np0005549474 podman[278882]: 2025-12-07 10:17:44.77538628 +0000 UTC m=+0.192095499 container start 7596cf1847c741b1dafa2efe6d7e95ddb84cbb000fed8d8ebfeaa9859361acb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_morse, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:17:44 np0005549474 awesome_morse[278898]: 167 167
Dec  7 05:17:44 np0005549474 systemd[1]: libpod-7596cf1847c741b1dafa2efe6d7e95ddb84cbb000fed8d8ebfeaa9859361acb2.scope: Deactivated successfully.
Dec  7 05:17:44 np0005549474 podman[278882]: 2025-12-07 10:17:44.786853571 +0000 UTC m=+0.203562830 container attach 7596cf1847c741b1dafa2efe6d7e95ddb84cbb000fed8d8ebfeaa9859361acb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_morse, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:17:44 np0005549474 podman[278882]: 2025-12-07 10:17:44.787272454 +0000 UTC m=+0.203981673 container died 7596cf1847c741b1dafa2efe6d7e95ddb84cbb000fed8d8ebfeaa9859361acb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 05:17:44 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a90727e80c58265181a58068cf65f13f9443809394fb7a3f29f14696d36e6c4f-merged.mount: Deactivated successfully.
Dec  7 05:17:45 np0005549474 podman[278882]: 2025-12-07 10:17:44.999960331 +0000 UTC m=+0.416669550 container remove 7596cf1847c741b1dafa2efe6d7e95ddb84cbb000fed8d8ebfeaa9859361acb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 05:17:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:44 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:17:45 np0005549474 systemd[1]: libpod-conmon-7596cf1847c741b1dafa2efe6d7e95ddb84cbb000fed8d8ebfeaa9859361acb2.scope: Deactivated successfully.
Dec  7 05:17:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:45.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:45 np0005549474 podman[278924]: 2025-12-07 10:17:45.274595759 +0000 UTC m=+0.057107460 container create 1d1ca6ee26a1546aad73420b386dc0030b5406603c2233aa49654726c172e8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_pasteur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:17:45 np0005549474 systemd[1]: Started libpod-conmon-1d1ca6ee26a1546aad73420b386dc0030b5406603c2233aa49654726c172e8fa.scope.
Dec  7 05:17:45 np0005549474 podman[278924]: 2025-12-07 10:17:45.256146918 +0000 UTC m=+0.038658609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:17:45 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:17:45 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe37f779ab2982aa3e6249bad7e0dedcef8f5930bc76bf03f208e1264c3db57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:45 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe37f779ab2982aa3e6249bad7e0dedcef8f5930bc76bf03f208e1264c3db57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:45 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe37f779ab2982aa3e6249bad7e0dedcef8f5930bc76bf03f208e1264c3db57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:45 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe37f779ab2982aa3e6249bad7e0dedcef8f5930bc76bf03f208e1264c3db57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:45 np0005549474 podman[278924]: 2025-12-07 10:17:45.378384703 +0000 UTC m=+0.160896404 container init 1d1ca6ee26a1546aad73420b386dc0030b5406603c2233aa49654726c172e8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 05:17:45 np0005549474 podman[278924]: 2025-12-07 10:17:45.390320517 +0000 UTC m=+0.172832208 container start 1d1ca6ee26a1546aad73420b386dc0030b5406603c2233aa49654726c172e8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 05:17:45 np0005549474 podman[278924]: 2025-12-07 10:17:45.393798491 +0000 UTC m=+0.176310202 container attach 1d1ca6ee26a1546aad73420b386dc0030b5406603c2233aa49654726c172e8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_pasteur, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]: {
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:    "0": [
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:        {
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            "devices": [
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "/dev/loop3"
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            ],
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            "lv_name": "ceph_lv0",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            "lv_size": "21470642176",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            "name": "ceph_lv0",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            "tags": {
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.cluster_name": "ceph",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.crush_device_class": "",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.encrypted": "0",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.osd_id": "0",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.type": "block",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.vdo": "0",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:                "ceph.with_tpm": "0"
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            },
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            "type": "block",
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:            "vg_name": "ceph_vg0"
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:        }
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]:    ]
Dec  7 05:17:45 np0005549474 loving_pasteur[278940]: }
Dec  7 05:17:45 np0005549474 systemd[1]: libpod-1d1ca6ee26a1546aad73420b386dc0030b5406603c2233aa49654726c172e8fa.scope: Deactivated successfully.
Dec  7 05:17:45 np0005549474 podman[278924]: 2025-12-07 10:17:45.761488381 +0000 UTC m=+0.544000052 container died 1d1ca6ee26a1546aad73420b386dc0030b5406603c2233aa49654726c172e8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_pasteur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:17:45 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ffe37f779ab2982aa3e6249bad7e0dedcef8f5930bc76bf03f208e1264c3db57-merged.mount: Deactivated successfully.
Dec  7 05:17:45 np0005549474 podman[278924]: 2025-12-07 10:17:45.834751969 +0000 UTC m=+0.617263660 container remove 1d1ca6ee26a1546aad73420b386dc0030b5406603c2233aa49654726c172e8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:17:45 np0005549474 systemd[1]: libpod-conmon-1d1ca6ee26a1546aad73420b386dc0030b5406603c2233aa49654726c172e8fa.scope: Deactivated successfully.
Dec  7 05:17:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:46.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec  7 05:17:46 np0005549474 nova_compute[256753]: 2025-12-07 10:17:46.335 256757 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765102651.3339167, 3fa72663-9aaa-4e36-92ba-35bec3874b64 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  7 05:17:46 np0005549474 nova_compute[256753]: 2025-12-07 10:17:46.336 256757 INFO nova.compute.manager [-] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] VM Stopped (Lifecycle Event)
Dec  7 05:17:46 np0005549474 nova_compute[256753]: 2025-12-07 10:17:46.364 256757 DEBUG nova.compute.manager [None req-d0650694-fbf7-4d9c-973a-04ac54ec3896 - - - - - -] [instance: 3fa72663-9aaa-4e36-92ba-35bec3874b64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  7 05:17:46 np0005549474 nova_compute[256753]: 2025-12-07 10:17:46.373 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:17:46 np0005549474 nova_compute[256753]: 2025-12-07 10:17:46.375 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:17:46 np0005549474 nova_compute[256753]: 2025-12-07 10:17:46.375 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:17:46 np0005549474 nova_compute[256753]: 2025-12-07 10:17:46.375 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:17:46 np0005549474 nova_compute[256753]: 2025-12-07 10:17:46.421 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:46 np0005549474 nova_compute[256753]: 2025-12-07 10:17:46.422 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:17:46 np0005549474 podman[279078]: 2025-12-07 10:17:46.591415738 +0000 UTC m=+0.053479481 container create ba643186fb7e06ac50fa782bc9bde0b13b7b15f082591febc5cc2c511378b545 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sanderson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:17:46 np0005549474 systemd[1]: Started libpod-conmon-ba643186fb7e06ac50fa782bc9bde0b13b7b15f082591febc5cc2c511378b545.scope.
Dec  7 05:17:46 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:17:46 np0005549474 podman[279078]: 2025-12-07 10:17:46.576073611 +0000 UTC m=+0.038137374 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:17:46 np0005549474 podman[279078]: 2025-12-07 10:17:46.678135139 +0000 UTC m=+0.140198882 container init ba643186fb7e06ac50fa782bc9bde0b13b7b15f082591febc5cc2c511378b545 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sanderson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 05:17:46 np0005549474 podman[279078]: 2025-12-07 10:17:46.686183777 +0000 UTC m=+0.148247530 container start ba643186fb7e06ac50fa782bc9bde0b13b7b15f082591febc5cc2c511378b545 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sanderson, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 05:17:46 np0005549474 podman[279078]: 2025-12-07 10:17:46.690043622 +0000 UTC m=+0.152107395 container attach ba643186fb7e06ac50fa782bc9bde0b13b7b15f082591febc5cc2c511378b545 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sanderson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 05:17:46 np0005549474 zealous_sanderson[279094]: 167 167
Dec  7 05:17:46 np0005549474 systemd[1]: libpod-ba643186fb7e06ac50fa782bc9bde0b13b7b15f082591febc5cc2c511378b545.scope: Deactivated successfully.
Dec  7 05:17:46 np0005549474 podman[279078]: 2025-12-07 10:17:46.692237032 +0000 UTC m=+0.154300795 container died ba643186fb7e06ac50fa782bc9bde0b13b7b15f082591febc5cc2c511378b545 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sanderson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 05:17:46 np0005549474 systemd[1]: var-lib-containers-storage-overlay-93c5bbb2394e2b520083b4f2f17d4ab97e65b0c6a431e1c224e5f9453395f64e-merged.mount: Deactivated successfully.
Dec  7 05:17:46 np0005549474 podman[279078]: 2025-12-07 10:17:46.730796077 +0000 UTC m=+0.192859830 container remove ba643186fb7e06ac50fa782bc9bde0b13b7b15f082591febc5cc2c511378b545 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 05:17:46 np0005549474 systemd[1]: libpod-conmon-ba643186fb7e06ac50fa782bc9bde0b13b7b15f082591febc5cc2c511378b545.scope: Deactivated successfully.
Dec  7 05:17:46 np0005549474 podman[279121]: 2025-12-07 10:17:46.946959639 +0000 UTC m=+0.055801144 container create c9f2d88a7bd12f68f8415ea6e1ded8921a8a22213ec6a0f7224e7783d69ce99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ritchie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:17:46 np0005549474 systemd[1]: Started libpod-conmon-c9f2d88a7bd12f68f8415ea6e1ded8921a8a22213ec6a0f7224e7783d69ce99b.scope.
Dec  7 05:17:47 np0005549474 podman[279121]: 2025-12-07 10:17:46.922786514 +0000 UTC m=+0.031628039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:17:47 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:17:47 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f80a50c1139096a7a361843f1ce1e8d552cc65e78aeb3cf348ae0d57473e2e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:47 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f80a50c1139096a7a361843f1ce1e8d552cc65e78aeb3cf348ae0d57473e2e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:47 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f80a50c1139096a7a361843f1ce1e8d552cc65e78aeb3cf348ae0d57473e2e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:47 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f80a50c1139096a7a361843f1ce1e8d552cc65e78aeb3cf348ae0d57473e2e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:17:47 np0005549474 podman[279121]: 2025-12-07 10:17:47.055673637 +0000 UTC m=+0.164515202 container init c9f2d88a7bd12f68f8415ea6e1ded8921a8a22213ec6a0f7224e7783d69ce99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:17:47 np0005549474 podman[279121]: 2025-12-07 10:17:47.069946324 +0000 UTC m=+0.178787829 container start c9f2d88a7bd12f68f8415ea6e1ded8921a8a22213ec6a0f7224e7783d69ce99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:17:47 np0005549474 podman[279121]: 2025-12-07 10:17:47.074122197 +0000 UTC m=+0.182963702 container attach c9f2d88a7bd12f68f8415ea6e1ded8921a8a22213ec6a0f7224e7783d69ce99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:17:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:47.189Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:17:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:47.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:47 np0005549474 lvm[279213]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:17:47 np0005549474 lvm[279213]: VG ceph_vg0 finished
Dec  7 05:17:47 np0005549474 ecstatic_ritchie[279138]: {}
Dec  7 05:17:47 np0005549474 systemd[1]: libpod-c9f2d88a7bd12f68f8415ea6e1ded8921a8a22213ec6a0f7224e7783d69ce99b.scope: Deactivated successfully.
Dec  7 05:17:47 np0005549474 systemd[1]: libpod-c9f2d88a7bd12f68f8415ea6e1ded8921a8a22213ec6a0f7224e7783d69ce99b.scope: Consumed 1.405s CPU time.
Dec  7 05:17:47 np0005549474 podman[279121]: 2025-12-07 10:17:47.934447637 +0000 UTC m=+1.043289102 container died c9f2d88a7bd12f68f8415ea6e1ded8921a8a22213ec6a0f7224e7783d69ce99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:17:47 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6f80a50c1139096a7a361843f1ce1e8d552cc65e78aeb3cf348ae0d57473e2e2-merged.mount: Deactivated successfully.
Dec  7 05:17:47 np0005549474 podman[279121]: 2025-12-07 10:17:47.978358308 +0000 UTC m=+1.087199793 container remove c9f2d88a7bd12f68f8415ea6e1ded8921a8a22213ec6a0f7224e7783d69ce99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ritchie, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:17:47 np0005549474 systemd[1]: libpod-conmon-c9f2d88a7bd12f68f8415ea6e1ded8921a8a22213ec6a0f7224e7783d69ce99b.scope: Deactivated successfully.
Dec  7 05:17:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:17:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:17:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:48.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec  7 05:17:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:48.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:17:49 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:49 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:17:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:49.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:49] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  7 05:17:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:49] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  7 05:17:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:17:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:50.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec  7 05:17:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:51.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:51 np0005549474 nova_compute[256753]: 2025-12-07 10:17:51.423 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:17:51 np0005549474 nova_compute[256753]: 2025-12-07 10:17:51.425 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:17:51 np0005549474 nova_compute[256753]: 2025-12-07 10:17:51.425 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:17:51 np0005549474 nova_compute[256753]: 2025-12-07 10:17:51.425 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:17:51 np0005549474 nova_compute[256753]: 2025-12-07 10:17:51.431 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:51 np0005549474 nova_compute[256753]: 2025-12-07 10:17:51.431 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:17:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:52.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec  7 05:17:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:53.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:54.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:17:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:54 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:17:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:54 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:17:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:54 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:17:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:17:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:55.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:56.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:17:56 np0005549474 nova_compute[256753]: 2025-12-07 10:17:56.432 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:17:56 np0005549474 nova_compute[256753]: 2025-12-07 10:17:56.434 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:17:56 np0005549474 nova_compute[256753]: 2025-12-07 10:17:56.434 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:17:56 np0005549474 nova_compute[256753]: 2025-12-07 10:17:56.435 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:17:56 np0005549474 nova_compute[256753]: 2025-12-07 10:17:56.472 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:17:56 np0005549474 nova_compute[256753]: 2025-12-07 10:17:56.473 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:17:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:57.189Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:17:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:17:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:57.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:17:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:17:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:17:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:17:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:58.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:17:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:58.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:17:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:17:58.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:17:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:17:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:17:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:59.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:17:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:59] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:17:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:17:59] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:18:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:17:59 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:00 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:18:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:00.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:18:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:18:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:01.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:18:01 np0005549474 nova_compute[256753]: 2025-12-07 10:18:01.474 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:01 np0005549474 nova_compute[256753]: 2025-12-07 10:18:01.475 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:01 np0005549474 nova_compute[256753]: 2025-12-07 10:18:01.475 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:18:01 np0005549474 nova_compute[256753]: 2025-12-07 10:18:01.475 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:01 np0005549474 nova_compute[256753]: 2025-12-07 10:18:01.476 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:01 np0005549474 nova_compute[256753]: 2025-12-07 10:18:01.478 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:02.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:03.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:04.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:18:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:04 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:04 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:04 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:05 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:18:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:18:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:05.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:18:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:06.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:06 np0005549474 nova_compute[256753]: 2025-12-07 10:18:06.480 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:06 np0005549474 nova_compute[256753]: 2025-12-07 10:18:06.481 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:06 np0005549474 nova_compute[256753]: 2025-12-07 10:18:06.481 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:18:06 np0005549474 nova_compute[256753]: 2025-12-07 10:18:06.481 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:06 np0005549474 nova_compute[256753]: 2025-12-07 10:18:06.536 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:06 np0005549474 nova_compute[256753]: 2025-12-07 10:18:06.537 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:07.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:18:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:07.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:07 np0005549474 podman[279299]: 2025-12-07 10:18:07.302189759 +0000 UTC m=+0.095923433 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  7 05:18:07 np0005549474 podman[279300]: 2025-12-07 10:18:07.319450497 +0000 UTC m=+0.112974885 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  7 05:18:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:08.855Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:18:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:08.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:18:09 np0005549474 ovn_controller[154296]: 2025-12-07T10:18:09Z|00103|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  7 05:18:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:09.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:09] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:18:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:09] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:18:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:09 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:09 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:09 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:10 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:18:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:10.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:18:11 np0005549474 podman[279352]: 2025-12-07 10:18:11.260707226 +0000 UTC m=+0.063551145 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  7 05:18:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:11.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:11 np0005549474 nova_compute[256753]: 2025-12-07 10:18:11.538 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:11 np0005549474 nova_compute[256753]: 2025-12-07 10:18:11.540 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:11 np0005549474 nova_compute[256753]: 2025-12-07 10:18:11.540 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:18:11 np0005549474 nova_compute[256753]: 2025-12-07 10:18:11.540 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:11 np0005549474 nova_compute[256753]: 2025-12-07 10:18:11.540 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:11 np0005549474 nova_compute[256753]: 2025-12-07 10:18:11.541 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:12.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:18:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:18:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:18:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:18:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:18:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:18:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:18:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:18:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:13.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:14.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:18:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:14 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:14 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:14 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:15 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:18:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:16.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:16 np0005549474 nova_compute[256753]: 2025-12-07 10:18:16.541 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:16 np0005549474 nova_compute[256753]: 2025-12-07 10:18:16.542 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:16 np0005549474 nova_compute[256753]: 2025-12-07 10:18:16.542 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:18:16 np0005549474 nova_compute[256753]: 2025-12-07 10:18:16.542 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:16 np0005549474 nova_compute[256753]: 2025-12-07 10:18:16.543 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:17.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:18:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:17.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:18.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:18.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:18:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:19.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:19 np0005549474 nova_compute[256753]: 2025-12-07 10:18:19.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:18:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:19] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  7 05:18:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:19] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  7 05:18:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:19 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:19 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:19 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:20 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:18:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:20.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:18:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:21.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:21 np0005549474 nova_compute[256753]: 2025-12-07 10:18:21.544 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:22.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:23.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:24.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:18:24 np0005549474 nova_compute[256753]: 2025-12-07 10:18:24.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:18:24 np0005549474 nova_compute[256753]: 2025-12-07 10:18:24.755 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:18:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:24 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:24 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:24 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:18:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:25.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:25 np0005549474 nova_compute[256753]: 2025-12-07 10:18:25.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:18:25 np0005549474 nova_compute[256753]: 2025-12-07 10:18:25.752 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:18:25 np0005549474 nova_compute[256753]: 2025-12-07 10:18:25.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:18:25 np0005549474 nova_compute[256753]: 2025-12-07 10:18:25.782 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:18:25 np0005549474 nova_compute[256753]: 2025-12-07 10:18:25.783 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:18:25 np0005549474 nova_compute[256753]: 2025-12-07 10:18:25.783 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:18:25 np0005549474 nova_compute[256753]: 2025-12-07 10:18:25.783 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:18:25 np0005549474 nova_compute[256753]: 2025-12-07 10:18:25.783 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:18:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:26.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:18:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978054548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.286 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.461 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.463 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4571MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.463 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.464 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.536 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.536 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.545 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.547 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.547 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.547 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.549 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.585 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:26 np0005549474 nova_compute[256753]: 2025-12-07 10:18:26.586 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:18:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2419608147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:18:27 np0005549474 nova_compute[256753]: 2025-12-07 10:18:27.013 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:18:27 np0005549474 nova_compute[256753]: 2025-12-07 10:18:27.017 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:18:27 np0005549474 nova_compute[256753]: 2025-12-07 10:18:27.034 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:18:27 np0005549474 nova_compute[256753]: 2025-12-07 10:18:27.067 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:18:27 np0005549474 nova_compute[256753]: 2025-12-07 10:18:27.068 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:18:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:27.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:18:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:27.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:18:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:18:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:28.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:28.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:18:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:29] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:18:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:29] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:18:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:18:30 np0005549474 nova_compute[256753]: 2025-12-07 10:18:30.064 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:18:30 np0005549474 nova_compute[256753]: 2025-12-07 10:18:30.065 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:18:30 np0005549474 nova_compute[256753]: 2025-12-07 10:18:30.065 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:18:30 np0005549474 nova_compute[256753]: 2025-12-07 10:18:30.065 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:18:30 np0005549474 nova_compute[256753]: 2025-12-07 10:18:30.094 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:18:30 np0005549474 nova_compute[256753]: 2025-12-07 10:18:30.094 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:18:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:30.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:18:30 np0005549474 nova_compute[256753]: 2025-12-07 10:18:30.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:18:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:31.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:31 np0005549474 nova_compute[256753]: 2025-12-07 10:18:31.586 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:31 np0005549474 nova_compute[256753]: 2025-12-07 10:18:31.588 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:32.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:33.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:34.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:18:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:18:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:35.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:36.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:36 np0005549474 nova_compute[256753]: 2025-12-07 10:18:36.590 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:36 np0005549474 nova_compute[256753]: 2025-12-07 10:18:36.591 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:36 np0005549474 nova_compute[256753]: 2025-12-07 10:18:36.592 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:18:36 np0005549474 nova_compute[256753]: 2025-12-07 10:18:36.592 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:36 np0005549474 nova_compute[256753]: 2025-12-07 10:18:36.631 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:36 np0005549474 nova_compute[256753]: 2025-12-07 10:18:36.631 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:36 np0005549474 nova_compute[256753]: 2025-12-07 10:18:36.748 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:18:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:37.192Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:18:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:37.192Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:18:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:37.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:37 np0005549474 systemd-logind[796]: New session 56 of user zuul.
Dec  7 05:18:37 np0005549474 systemd[1]: Started Session 56 of User zuul.
Dec  7 05:18:37 np0005549474 podman[279473]: 2025-12-07 10:18:37.532271211 +0000 UTC m=+0.096969731 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  7 05:18:37 np0005549474 podman[279475]: 2025-12-07 10:18:37.606259507 +0000 UTC m=+0.170530275 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  7 05:18:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:38.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:18:38.630 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:18:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:18:38.631 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:18:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:18:38.631 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:18:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:38.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:18:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:38.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:18:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:39.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:39] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:18:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:39] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:18:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:39 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:18:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:40.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:18:40 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26170 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:40 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.25793 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:40 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16668 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:40 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26182 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:41 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.25802 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:41 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16680 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:41.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:41 np0005549474 nova_compute[256753]: 2025-12-07 10:18:41.632 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:41 np0005549474 nova_compute[256753]: 2025-12-07 10:18:41.634 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:41 np0005549474 nova_compute[256753]: 2025-12-07 10:18:41.635 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:18:41 np0005549474 nova_compute[256753]: 2025-12-07 10:18:41.635 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:41 np0005549474 nova_compute[256753]: 2025-12-07 10:18:41.657 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:41 np0005549474 nova_compute[256753]: 2025-12-07 10:18:41.658 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:41 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec  7 05:18:41 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015966285' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  7 05:18:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:18:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:42.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:42 np0005549474 podman[279770]: 2025-12-07 10:18:42.274516119 +0000 UTC m=+0.082442337 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  7 05:18:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:18:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:18:42
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'images', 'vms', '.nfs', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', '.mgr']
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:18:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:18:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:18:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:43.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:44.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:18:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:45.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:18:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:46.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:46 np0005549474 nova_compute[256753]: 2025-12-07 10:18:46.659 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:46 np0005549474 nova_compute[256753]: 2025-12-07 10:18:46.660 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:46 np0005549474 nova_compute[256753]: 2025-12-07 10:18:46.660 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:18:46 np0005549474 nova_compute[256753]: 2025-12-07 10:18:46.660 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:46 np0005549474 nova_compute[256753]: 2025-12-07 10:18:46.709 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:46 np0005549474 nova_compute[256753]: 2025-12-07 10:18:46.710 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:47.193Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:18:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:47.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:18:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:47.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:18:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:48.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:48 np0005549474 ovs-vsctl[279932]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  7 05:18:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:48.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:18:49 np0005549474 virtqemud[256299]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:18:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec  7 05:18:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1 op/s
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:18:49 np0005549474 virtqemud[256299]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:18:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:49.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:18:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:18:49 np0005549474 virtqemud[256299]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  7 05:18:49 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: cache status {prefix=cache status} (starting...)
Dec  7 05:18:49 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:49 np0005549474 podman[280368]: 2025-12-07 10:18:49.91169425 +0000 UTC m=+0.046941934 container create 5a8b63575e3880e5bd717a886fa2d75601049d464ec53e358b06e07c64c07ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mcclintock, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:18:49 np0005549474 systemd[1]: Started libpod-conmon-5a8b63575e3880e5bd717a886fa2d75601049d464ec53e358b06e07c64c07ef0.scope.
Dec  7 05:18:49 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:18:49 np0005549474 podman[280368]: 2025-12-07 10:18:49.892085219 +0000 UTC m=+0.027332933 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:18:49 np0005549474 podman[280368]: 2025-12-07 10:18:49.987095475 +0000 UTC m=+0.122343179 container init 5a8b63575e3880e5bd717a886fa2d75601049d464ec53e358b06e07c64c07ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mcclintock, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 05:18:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:49] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:18:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:49] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:18:49 np0005549474 podman[280368]: 2025-12-07 10:18:49.993775406 +0000 UTC m=+0.129023090 container start 5a8b63575e3880e5bd717a886fa2d75601049d464ec53e358b06e07c64c07ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 05:18:49 np0005549474 podman[280368]: 2025-12-07 10:18:49.996983723 +0000 UTC m=+0.132231407 container attach 5a8b63575e3880e5bd717a886fa2d75601049d464ec53e358b06e07c64c07ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 05:18:50 np0005549474 objective_mcclintock[280410]: 167 167
Dec  7 05:18:50 np0005549474 systemd[1]: libpod-5a8b63575e3880e5bd717a886fa2d75601049d464ec53e358b06e07c64c07ef0.scope: Deactivated successfully.
Dec  7 05:18:50 np0005549474 podman[280368]: 2025-12-07 10:18:50.002163033 +0000 UTC m=+0.137410737 container died 5a8b63575e3880e5bd717a886fa2d75601049d464ec53e358b06e07c64c07ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:18:50 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: client ls {prefix=client ls} (starting...)
Dec  7 05:18:50 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:50 np0005549474 lvm[280431]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:18:50 np0005549474 lvm[280431]: VG ceph_vg0 finished
Dec  7 05:18:50 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4b92b11c398fedb35fe3e8644a3aa9e4170c70646bbb53a6b91b8d8c3c5fe609-merged.mount: Deactivated successfully.
Dec  7 05:18:50 np0005549474 podman[280368]: 2025-12-07 10:18:50.049861437 +0000 UTC m=+0.185109121 container remove 5a8b63575e3880e5bd717a886fa2d75601049d464ec53e358b06e07c64c07ef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 05:18:50 np0005549474 systemd[1]: libpod-conmon-5a8b63575e3880e5bd717a886fa2d75601049d464ec53e358b06e07c64c07ef0.scope: Deactivated successfully.
Dec  7 05:18:50 np0005549474 podman[280483]: 2025-12-07 10:18:50.207355048 +0000 UTC m=+0.041884197 container create a5b640294f92bd9943fb7309caf044f4b408defd7fc5a3b9ac915861f63afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_newton, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 05:18:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:50.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:50 np0005549474 systemd[1]: Started libpod-conmon-a5b640294f92bd9943fb7309caf044f4b408defd7fc5a3b9ac915861f63afa48.scope.
Dec  7 05:18:50 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:18:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8851e573c2a63263b799828c36e90f3d1a0cf488583417db01c7011b293789/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8851e573c2a63263b799828c36e90f3d1a0cf488583417db01c7011b293789/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8851e573c2a63263b799828c36e90f3d1a0cf488583417db01c7011b293789/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8851e573c2a63263b799828c36e90f3d1a0cf488583417db01c7011b293789/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8851e573c2a63263b799828c36e90f3d1a0cf488583417db01c7011b293789/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:50 np0005549474 podman[280483]: 2025-12-07 10:18:50.282347661 +0000 UTC m=+0.116876830 container init a5b640294f92bd9943fb7309caf044f4b408defd7fc5a3b9ac915861f63afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 05:18:50 np0005549474 podman[280483]: 2025-12-07 10:18:50.189014531 +0000 UTC m=+0.023543710 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:18:50 np0005549474 podman[280483]: 2025-12-07 10:18:50.29079481 +0000 UTC m=+0.125323949 container start a5b640294f92bd9943fb7309caf044f4b408defd7fc5a3b9ac915861f63afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_newton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 05:18:50 np0005549474 podman[280483]: 2025-12-07 10:18:50.294154541 +0000 UTC m=+0.128683710 container attach a5b640294f92bd9943fb7309caf044f4b408defd7fc5a3b9ac915861f63afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:18:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:18:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:18:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:18:50 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:18:50 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.25820 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:50 np0005549474 laughing_newton[280527]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:18:50 np0005549474 laughing_newton[280527]: --> All data devices are unavailable
Dec  7 05:18:50 np0005549474 systemd[1]: libpod-a5b640294f92bd9943fb7309caf044f4b408defd7fc5a3b9ac915861f63afa48.scope: Deactivated successfully.
Dec  7 05:18:50 np0005549474 podman[280483]: 2025-12-07 10:18:50.612063073 +0000 UTC m=+0.446592222 container died a5b640294f92bd9943fb7309caf044f4b408defd7fc5a3b9ac915861f63afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_newton, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 05:18:50 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0a8851e573c2a63263b799828c36e90f3d1a0cf488583417db01c7011b293789-merged.mount: Deactivated successfully.
Dec  7 05:18:50 np0005549474 podman[280483]: 2025-12-07 10:18:50.656327833 +0000 UTC m=+0.490856982 container remove a5b640294f92bd9943fb7309caf044f4b408defd7fc5a3b9ac915861f63afa48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:18:50 np0005549474 systemd[1]: libpod-conmon-a5b640294f92bd9943fb7309caf044f4b408defd7fc5a3b9ac915861f63afa48.scope: Deactivated successfully.
Dec  7 05:18:50 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: damage ls {prefix=damage ls} (starting...)
Dec  7 05:18:50 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:50 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16704 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:50 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump loads {prefix=dump loads} (starting...)
Dec  7 05:18:50 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  7 05:18:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3768533798' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  7 05:18:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  7 05:18:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  7 05:18:50 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.25841 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:50 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  7 05:18:50 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:18:51 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  7 05:18:51 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16722 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26212 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:51 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  7 05:18:51 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:51 np0005549474 podman[280801]: 2025-12-07 10:18:51.254118234 +0000 UTC m=+0.039601355 container create 1575af1513acfceae9222f1f5301c491c3758c2eaa4017d5f92c0b98e5550a92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 05:18:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:18:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2780908691' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:18:51 np0005549474 systemd[1]: Started libpod-conmon-1575af1513acfceae9222f1f5301c491c3758c2eaa4017d5f92c0b98e5550a92.scope.
Dec  7 05:18:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1 op/s
Dec  7 05:18:51 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:18:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.25865 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:51 np0005549474 podman[280801]: 2025-12-07 10:18:51.235167819 +0000 UTC m=+0.020650950 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:18:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:51.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:51 np0005549474 podman[280801]: 2025-12-07 10:18:51.344484774 +0000 UTC m=+0.129967925 container init 1575af1513acfceae9222f1f5301c491c3758c2eaa4017d5f92c0b98e5550a92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:18:51 np0005549474 podman[280801]: 2025-12-07 10:18:51.35170497 +0000 UTC m=+0.137188091 container start 1575af1513acfceae9222f1f5301c491c3758c2eaa4017d5f92c0b98e5550a92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bose, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:18:51 np0005549474 heuristic_bose[280824]: 167 167
Dec  7 05:18:51 np0005549474 podman[280801]: 2025-12-07 10:18:51.355578685 +0000 UTC m=+0.141061816 container attach 1575af1513acfceae9222f1f5301c491c3758c2eaa4017d5f92c0b98e5550a92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 05:18:51 np0005549474 systemd[1]: libpod-1575af1513acfceae9222f1f5301c491c3758c2eaa4017d5f92c0b98e5550a92.scope: Deactivated successfully.
Dec  7 05:18:51 np0005549474 conmon[280824]: conmon 1575af1513acfceae922 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1575af1513acfceae9222f1f5301c491c3758c2eaa4017d5f92c0b98e5550a92.scope/container/memory.events
Dec  7 05:18:51 np0005549474 podman[280801]: 2025-12-07 10:18:51.36018251 +0000 UTC m=+0.145665641 container died 1575af1513acfceae9222f1f5301c491c3758c2eaa4017d5f92c0b98e5550a92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bose, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 05:18:51 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  7 05:18:51 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:51 np0005549474 systemd[1]: var-lib-containers-storage-overlay-7380dd4d21b2003bb23bd016a470a34c1a95ea4719c705528972f5012815f9b4-merged.mount: Deactivated successfully.
Dec  7 05:18:51 np0005549474 podman[280801]: 2025-12-07 10:18:51.393238117 +0000 UTC m=+0.178721238 container remove 1575af1513acfceae9222f1f5301c491c3758c2eaa4017d5f92c0b98e5550a92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bose, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:18:51 np0005549474 systemd[1]: libpod-conmon-1575af1513acfceae9222f1f5301c491c3758c2eaa4017d5f92c0b98e5550a92.scope: Deactivated successfully.
Dec  7 05:18:51 np0005549474 podman[280888]: 2025-12-07 10:18:51.543502931 +0000 UTC m=+0.038942357 container create 40674b2b59941d97f768700e398c20578837b93fb06c621f3cd3614cb746f32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 05:18:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16740 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:51 np0005549474 systemd[1]: Started libpod-conmon-40674b2b59941d97f768700e398c20578837b93fb06c621f3cd3614cb746f32f.scope.
Dec  7 05:18:51 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:18:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45361ad4c08a400eaab31201f0f8234774b11696037acf089afae5104ee9980/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45361ad4c08a400eaab31201f0f8234774b11696037acf089afae5104ee9980/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45361ad4c08a400eaab31201f0f8234774b11696037acf089afae5104ee9980/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:51 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45361ad4c08a400eaab31201f0f8234774b11696037acf089afae5104ee9980/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:51 np0005549474 podman[280888]: 2025-12-07 10:18:51.527495357 +0000 UTC m=+0.022934803 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:18:51 np0005549474 podman[280888]: 2025-12-07 10:18:51.627961482 +0000 UTC m=+0.123400938 container init 40674b2b59941d97f768700e398c20578837b93fb06c621f3cd3614cb746f32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Dec  7 05:18:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec  7 05:18:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1851089738' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  7 05:18:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  7 05:18:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  7 05:18:51 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  7 05:18:51 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:51 np0005549474 podman[280888]: 2025-12-07 10:18:51.64226738 +0000 UTC m=+0.137706806 container start 40674b2b59941d97f768700e398c20578837b93fb06c621f3cd3614cb746f32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:18:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26227 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:51 np0005549474 podman[280888]: 2025-12-07 10:18:51.645405535 +0000 UTC m=+0.140844961 container attach 40674b2b59941d97f768700e398c20578837b93fb06c621f3cd3614cb746f32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:18:51 np0005549474 nova_compute[256753]: 2025-12-07 10:18:51.711 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:51 np0005549474 nova_compute[256753]: 2025-12-07 10:18:51.713 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:18:51 np0005549474 nova_compute[256753]: 2025-12-07 10:18:51.714 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:18:51 np0005549474 nova_compute[256753]: 2025-12-07 10:18:51.714 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.25880 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:51 np0005549474 nova_compute[256753]: 2025-12-07 10:18:51.766 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:51 np0005549474 nova_compute[256753]: 2025-12-07 10:18:51.767 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:18:51 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  7 05:18:51 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]: {
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:    "0": [
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:        {
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            "devices": [
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "/dev/loop3"
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            ],
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            "lv_name": "ceph_lv0",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            "lv_size": "21470642176",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            "name": "ceph_lv0",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            "tags": {
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.cluster_name": "ceph",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.crush_device_class": "",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.encrypted": "0",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.osd_id": "0",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.type": "block",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.vdo": "0",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:                "ceph.with_tpm": "0"
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            },
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            "type": "block",
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:            "vg_name": "ceph_vg0"
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:        }
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]:    ]
Dec  7 05:18:51 np0005549474 nice_ganguly[280907]: }
Dec  7 05:18:51 np0005549474 podman[280888]: 2025-12-07 10:18:51.9527868 +0000 UTC m=+0.448226236 container died 40674b2b59941d97f768700e398c20578837b93fb06c621f3cd3614cb746f32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:18:51 np0005549474 systemd[1]: libpod-40674b2b59941d97f768700e398c20578837b93fb06c621f3cd3614cb746f32f.scope: Deactivated successfully.
Dec  7 05:18:51 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d45361ad4c08a400eaab31201f0f8234774b11696037acf089afae5104ee9980-merged.mount: Deactivated successfully.
Dec  7 05:18:51 np0005549474 podman[280888]: 2025-12-07 10:18:51.998229733 +0000 UTC m=+0.493669159 container remove 40674b2b59941d97f768700e398c20578837b93fb06c621f3cd3614cb746f32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 05:18:52 np0005549474 systemd[1]: libpod-conmon-40674b2b59941d97f768700e398c20578837b93fb06c621f3cd3614cb746f32f.scope: Deactivated successfully.
Dec  7 05:18:52 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26242 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:52 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16773 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:52 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: ops {prefix=ops} (starting...)
Dec  7 05:18:52 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec  7 05:18:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2127122219' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  7 05:18:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.003000080s ======
Dec  7 05:18:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:52.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec  7 05:18:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec  7 05:18:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/223410288' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  7 05:18:52 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26254 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:52 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16800 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec  7 05:18:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2001357645' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  7 05:18:52 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.25916 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:52 np0005549474 podman[281125]: 2025-12-07 10:18:52.553325385 +0000 UTC m=+0.037341233 container create 6a42b65ac2d50d65fcc00abd3f22547fd6386eca75a4eb59989c2f921b897ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 05:18:52 np0005549474 systemd[1]: Started libpod-conmon-6a42b65ac2d50d65fcc00abd3f22547fd6386eca75a4eb59989c2f921b897ccf.scope.
Dec  7 05:18:52 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:18:52 np0005549474 podman[281125]: 2025-12-07 10:18:52.62539688 +0000 UTC m=+0.109412758 container init 6a42b65ac2d50d65fcc00abd3f22547fd6386eca75a4eb59989c2f921b897ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:18:52 np0005549474 podman[281125]: 2025-12-07 10:18:52.631463334 +0000 UTC m=+0.115479182 container start 6a42b65ac2d50d65fcc00abd3f22547fd6386eca75a4eb59989c2f921b897ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cannon, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:18:52 np0005549474 podman[281125]: 2025-12-07 10:18:52.537548987 +0000 UTC m=+0.021564855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:18:52 np0005549474 recursing_cannon[281148]: 167 167
Dec  7 05:18:52 np0005549474 systemd[1]: libpod-6a42b65ac2d50d65fcc00abd3f22547fd6386eca75a4eb59989c2f921b897ccf.scope: Deactivated successfully.
Dec  7 05:18:52 np0005549474 conmon[281148]: conmon 6a42b65ac2d50d65fcc0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a42b65ac2d50d65fcc00abd3f22547fd6386eca75a4eb59989c2f921b897ccf.scope/container/memory.events
Dec  7 05:18:52 np0005549474 podman[281125]: 2025-12-07 10:18:52.638157876 +0000 UTC m=+0.122173754 container attach 6a42b65ac2d50d65fcc00abd3f22547fd6386eca75a4eb59989c2f921b897ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:18:52 np0005549474 podman[281125]: 2025-12-07 10:18:52.638441993 +0000 UTC m=+0.122457841 container died 6a42b65ac2d50d65fcc00abd3f22547fd6386eca75a4eb59989c2f921b897ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cannon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:18:52 np0005549474 systemd[1]: var-lib-containers-storage-overlay-52441572aa0fa9656477d60ef12418d6e4384ac6e568bed9d13ae359c5d18eb3-merged.mount: Deactivated successfully.
Dec  7 05:18:52 np0005549474 podman[281125]: 2025-12-07 10:18:52.669475885 +0000 UTC m=+0.153491733 container remove 6a42b65ac2d50d65fcc00abd3f22547fd6386eca75a4eb59989c2f921b897ccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cannon, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:18:52 np0005549474 systemd[1]: libpod-conmon-6a42b65ac2d50d65fcc00abd3f22547fd6386eca75a4eb59989c2f921b897ccf.scope: Deactivated successfully.
Dec  7 05:18:52 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: session ls {prefix=session ls} (starting...)
Dec  7 05:18:52 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:18:52 np0005549474 podman[281211]: 2025-12-07 10:18:52.823674166 +0000 UTC m=+0.047985552 container create 9eed3545130dcf9efe2c9d81392ae9dbd0ef477b5dc941abc2e8246077ffa677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 05:18:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:52 np0005549474 systemd[1]: Started libpod-conmon-9eed3545130dcf9efe2c9d81392ae9dbd0ef477b5dc941abc2e8246077ffa677.scope.
Dec  7 05:18:52 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16821 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:52 np0005549474 podman[281211]: 2025-12-07 10:18:52.799781569 +0000 UTC m=+0.024092965 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:18:52 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:18:52 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: status {prefix=status} (starting...)
Dec  7 05:18:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7dbcd10fdc04fa1462f5433574148a4a9b4e556cb2c15a0b6283e7a2f330fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7dbcd10fdc04fa1462f5433574148a4a9b4e556cb2c15a0b6283e7a2f330fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7dbcd10fdc04fa1462f5433574148a4a9b4e556cb2c15a0b6283e7a2f330fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:52 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7dbcd10fdc04fa1462f5433574148a4a9b4e556cb2c15a0b6283e7a2f330fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:18:52 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.25943 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:52 np0005549474 podman[281211]: 2025-12-07 10:18:52.929836815 +0000 UTC m=+0.154148201 container init 9eed3545130dcf9efe2c9d81392ae9dbd0ef477b5dc941abc2e8246077ffa677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_colden, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 05:18:52 np0005549474 podman[281211]: 2025-12-07 10:18:52.936483916 +0000 UTC m=+0.160795282 container start 9eed3545130dcf9efe2c9d81392ae9dbd0ef477b5dc941abc2e8246077ffa677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:18:52 np0005549474 podman[281211]: 2025-12-07 10:18:52.939450296 +0000 UTC m=+0.163761692 container attach 9eed3545130dcf9efe2c9d81392ae9dbd0ef477b5dc941abc2e8246077ffa677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_colden, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:18:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  7 05:18:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/87309197' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  7 05:18:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 675 B/s rd, 0 op/s
Dec  7 05:18:53 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26278 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:53.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773047032' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3917696844' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  7 05:18:53 np0005549474 lvm[281399]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:18:53 np0005549474 lvm[281399]: VG ceph_vg0 finished
Dec  7 05:18:53 np0005549474 optimistic_colden[281231]: {}
Dec  7 05:18:53 np0005549474 podman[281211]: 2025-12-07 10:18:53.69961343 +0000 UTC m=+0.923924806 container died 9eed3545130dcf9efe2c9d81392ae9dbd0ef477b5dc941abc2e8246077ffa677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Dec  7 05:18:53 np0005549474 systemd[1]: libpod-9eed3545130dcf9efe2c9d81392ae9dbd0ef477b5dc941abc2e8246077ffa677.scope: Deactivated successfully.
Dec  7 05:18:53 np0005549474 systemd[1]: libpod-9eed3545130dcf9efe2c9d81392ae9dbd0ef477b5dc941abc2e8246077ffa677.scope: Consumed 1.138s CPU time.
Dec  7 05:18:53 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26290 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997670805' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  7 05:18:53 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0b7dbcd10fdc04fa1462f5433574148a4a9b4e556cb2c15a0b6283e7a2f330fa-merged.mount: Deactivated successfully.
Dec  7 05:18:53 np0005549474 podman[281211]: 2025-12-07 10:18:53.918135806 +0000 UTC m=+1.142447172 container remove 9eed3545130dcf9efe2c9d81392ae9dbd0ef477b5dc941abc2e8246077ffa677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/152797592' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:18:53 np0005549474 systemd[1]: libpod-conmon-9eed3545130dcf9efe2c9d81392ae9dbd0ef477b5dc941abc2e8246077ffa677.scope: Deactivated successfully.
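
The podman/systemd lines above are the normal teardown of a short-lived cephadm exec container: the container process exits ("died"), its libpod scope is released, the overlay mount is unwound, the container record is removed, and finally the conmon scope goes away. A minimal sketch for pulling these lifecycle events out of a journal like this one (Python stdlib only; the regexes are fitted to the exact line shapes shown here, not to any documented podman schema):

    import re

    CID = r"(?P<cid>[0-9a-f]{64})"
    # "podman[pid]: <event timestamp> container died|remove <64-hex id> (labels...)"
    EVENT = re.compile(r"podman\[\d+\]: .* container (?P<event>died|remove) " + CID)
    NAME = re.compile(r"name=(?P<name>[^,)]+)")

    def container_events(lines):
        """Yield (event, short_id, name) for podman lifecycle lines like the ones above."""
        for line in lines:
            m = EVENT.search(line)
            if m:
                n = NAME.search(line)
                yield m.group("event"), m.group("cid")[:12], n.group("name") if n else None

    # Feeding the two podman lines above yields:
    #   ('died',   '9eed3545130d', 'optimistic_colden')
    #   ('remove', '9eed3545130d', 'optimistic_colden')
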
Dec  7 05:18:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:18:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  7 05:18:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  7 05:18:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:18:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:54.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:18:54 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16881 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T10:18:54.296+0000 7f2c9a7e3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  7 05:18:54 np0005549474 ceph-mgr[74811]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  7 05:18:54 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.25997 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T10:18:54.319+0000 7f2c9a7e3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  7 05:18:54 np0005549474 ceph-mgr[74811]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
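
The (95) replies above are the mgr rejecting the `insights` command with EOPNOTSUPP (errno 95, "Operation not supported") because the module is not loaded; the message itself carries the remediation (`ceph mgr module enable insights`). A small sketch, assuming the quoted-hint format stays stable across modules, that collects such failures and the suggested fix:

    import re

    MOD_ERR = re.compile(
        r"Module '(?P<mod>\w+)' is not enabled/loaded "
        r"\(required by command '(?P<cmd>[^']+)'\): use `(?P<fix>[^`]+)`"
    )

    def missing_modules(lines):
        """Map module name -> (failing command, suggested enable command)."""
        out = {}
        for line in lines:
            m = MOD_ERR.search(line)
            if m:
                out[m.group("mod")] = (m.group("cmd"), m.group("fix"))
        return out

    # For the lines above:
    #   {'insights': ('insights', 'ceph mgr module enable insights')}
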
Dec  7 05:18:54 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:18:54 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:18:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  7 05:18:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/982302260' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  7 05:18:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec  7 05:18:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2007917552' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  7 05:18:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  7 05:18:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2323889937' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
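
Each client command handled by mon.compute-0 shows up twice above: once as handle_command with the raw mon_command, and once on the audit channel as a from=/entity=/cmd= record ending in ": dispatch". A rough parser for the audit dispatch lines (regex fitted to the format above; cmd= is a JSON array, so json.loads recovers the command prefix):

    import json, re

    AUDIT = re.compile(
        r"log_channel\(audit\) log \[(?P<lvl>\w+)\] : "
        r"from='(?P<from>[^']*)' entity='(?P<entity>[^']*)' cmd=(?P<cmd>\[.*\]): dispatch"
    )

    def audit_dispatches(lines):
        """Yield (level, entity, command prefix) for audit dispatch lines."""
        for line in lines:
            m = AUDIT.search(line)
            if m:
                cmd = json.loads(m.group("cmd"))
                yield m.group("lvl"), m.group("entity"), cmd[0].get("prefix")

    # e.g. the "mgr stat" line above -> ('DBG', 'client.admin', 'mgr stat')
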
Dec  7 05:18:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26329 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:55 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T10:18:55.065+0000 7f2c9a7e3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  7 05:18:55 np0005549474 ceph-mgr[74811]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  7 05:18:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec  7 05:18:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1348909976' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  7 05:18:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26042 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1 op/s
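
The pgmap line above is the mgr's periodic cluster summary: 337 PGs, all active+clean, 41 MiB of data in 289 MiB of raw usage, 60 GiB of 60 GiB still available. A sketch parsing that summary, with field names assumed from the layout of this one line:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    def pgmap_summary(line):
        """Return the pgmap fields as a dict, or None if the line is not a pgmap line."""
        m = PGMAP.search(line)
        return m.groupdict() if m else None

    # The v1104 line above -> {'ver': '1104', 'pgs': '337',
    #   'states': '337 active+clean', 'data': '41 MiB', 'used': '289 MiB',
    #   'avail': '60 GiB', 'total': '60 GiB'}
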
Dec  7 05:18:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  7 05:18:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175056470' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  7 05:18:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:18:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:55.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
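
The beast lines above are the rgw frontend's access log, one line per request: client, user, timestamp, request line, HTTP status, byte count, and latency. The anonymous HEAD / probes arriving roughly once per second from 192.168.122.100/.102 look like health checks (an inference from the pattern, not something the log states). A sketch to extract the fields:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d{3}) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    def rgw_requests(lines):
        """Yield (client, request, status, latency seconds) from beast access lines."""
        for line in lines:
            m = BEAST.search(line)
            if m:
                yield (m.group("client"), m.group("request"),
                       int(m.group("status")), float(m.group("latency")))

    # The line above -> ('192.168.122.100', 'HEAD / HTTP/1.0', 200, 0.001000026)
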
Dec  7 05:18:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26066 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16938 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  7 05:18:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2991316428' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  7 05:18:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:18:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:18:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:18:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:18:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
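
The ganesha lines above show the NFS server entering a 90-second grace window with no client state to reclaim (clid count(0)). The nfsd timestamps appear to be day/month/year, which matches the surrounding Dec 7 syslog timestamps; under that assumption, the expected end of the grace window can be computed directly:

    from datetime import datetime, timedelta

    # "NFS Server Now IN GRACE, duration 90", logged at 07/12/2025 10:18:55
    # (read as day/month/year per the assumption above):
    start = datetime.strptime("07/12/2025 10:18:55", "%d/%m/%Y %H:%M:%S")
    print(start + timedelta(seconds=90))   # 2025-12-07 10:20:25
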
Dec  7 05:18:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26084 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26380 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  7 05:18:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/295760253' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  7 05:18:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:56.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26114 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16968 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  7 05:18:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502013220' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 138 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=136/137 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 138 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=136/137 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000034 0 0.000000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 138 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=136/137 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 2039808 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 139 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=136/137 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.289008 2 0.000160
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 139 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=136/137 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.291579 0 0.000000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 139 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=136/137 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 139 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=138/139 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 139 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=138/139 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 139 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=138/139 n=2 ec=57/42 lis/c=138/97 les/c/f=139/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004163 4 0.000248
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 139 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=138/139 n=2 ec=57/42 lis/c=138/97 les/c/f=139/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 139 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=138/139 n=2 ec=57/42 lis/c=138/97 les/c/f=139/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 pg_epoch: 139 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=138/139 n=2 ec=57/42 lis/c=138/97 les/c/f=139/98/0 sis=138) [0] r=0 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
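
Between epochs 138 and 139, pg 10.1b above walks the full peering state machine: GetMissing, WaitUpThru, Active/Activating, Recovered, Clean. The three numbers after each "exit <state>" appear to be seconds spent in the state, events handled there, and time spent handling them (an assumption from the line shape; the log does not label them). A sketch summarizing dwell times:

    import re

    EXIT = re.compile(
        r"pg\[(?P<pgid>[0-9a-f.]+)\(.*\] exit (?P<state>\S+) "
        r"(?P<secs>[\d.]+) (?P<events>\d+) (?P<evtime>[\d.]+)"
    )

    def state_dwell_times(lines):
        """Yield (pgid, state, seconds-in-state) from PG 'exit <state> ...' lines."""
        for line in lines:
            m = EXIT.search(line)
            if m:
                yield m.group("pgid"), m.group("state"), float(m.group("secs"))

    # For pg 10.1b above, WaitUpThru dominates: ~0.289s of the ~0.292s
    # spent in Started/Primary/Peering overall.
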
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fca81000/0x0/0x4ffc00000, data 0xf1461/0x19a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 893672 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 2031616 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 2023424 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 2023424 heap: 78602240 old mem: 2845415832 new mem: 2845415832
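
The tune_memory lines above are the OSD's priority-cache autotuner sampling its heap. In every sample in this log, mapped + unmapped equals heap (e.g. 76578816 + 2023424 = 78602240), and "old mem" equal to "new mem" means the ~2.8 GB cache target was left unchanged against the 4 GiB target shown. A sketch that checks both properties:

    import re

    TUNE = re.compile(
        r"tune_memory target: (?P<target>\d+) mapped: (?P<mapped>\d+) "
        r"unmapped: (?P<unmapped>\d+) heap: (?P<heap>\d+) "
        r"old mem: (?P<old>\d+) new mem: (?P<new>\d+)"
    )

    def tune_samples(lines):
        """Yield (heap bytes, target-moved?) from tune_memory lines."""
        for line in lines:
            m = TUNE.search(line)
            if m:
                d = {k: int(v) for k, v in m.groupdict().items()}
                # Holds for every sample in this log: mapped + unmapped == heap.
                assert d["mapped"] + d["unmapped"] == d["heap"]
                yield d["heap"], d["new"] != d["old"]
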
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 1998848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf54ef/0x1a0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 1998848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895834 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 1990656 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 1990656 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 1974272 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.970483780s of 10.779960632s, submitted: 34
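
The _kv_sync_thread line above reports how busy BlueStore's RocksDB commit thread was over a sampling window: idle 9.97s of 10.78s with 34 transactions submitted, i.e. roughly 92% idle. A small sketch to turn that into a busy fraction:

    import re

    KV = re.compile(
        r"_kv_sync_thread utilization: idle (?P<idle>[\d.]+)s of "
        r"(?P<span>[\d.]+)s, submitted: (?P<n>\d+)"
    )

    def kv_busy(line):
        """Return (busy fraction, transactions submitted) for a _kv_sync_thread line."""
        m = KV.search(line)
        idle, span = float(m.group("idle")), float(m.group("span"))
        return 1 - idle / span, int(m.group("n"))

    # The line above -> (~0.075, 34): the kv sync thread is ~92% idle.
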
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 1957888 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fca71000/0x0/0x4ffc00000, data 0xfb69b/0x1a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 1949696 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912756 data_alloc: 218103808 data_used: 102400
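
The _resize_shards lines show how the priority-cache target is carved into kv, kv_onode, meta, and data shards, with actual usage far below the allocations (kv_used is 2144 bytes against a ~1.2 GB kv allocation, consistent with a nearly idle, nearly empty OSD). A sketch computing per-shard utilization from one such line:

    import re

    SHARDS = re.compile(
        r"_resize_shards cache_size: (?P<cache>\d+) "
        r"kv_alloc: (?P<kv_a>\d+) kv_used: (?P<kv_u>\d+) "
        r"kv_onode_alloc: (?P<on_a>\d+) kv_onode_used: (?P<on_u>\d+) "
        r"meta_alloc: (?P<m_a>\d+) meta_used: (?P<m_u>\d+) "
        r"data_alloc: (?P<d_a>\d+) data_used: (?P<d_u>\d+)"
    )

    def shard_utilization(line):
        """Return used/allocated per cache shard for a _resize_shards line."""
        d = {k: int(v) for k, v in SHARDS.search(line).groupdict().items()}
        return {name: d[p + "_u"] / d[p + "_a"]
                for name, p in [("kv", "kv"), ("onode", "on"), ("meta", "m"), ("data", "d")]}

    # For the line above, every shard is well under 0.1% utilized.
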
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 1933312 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fca69000/0x0/0x4ffc00000, data 0xff791/0x1af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fca69000/0x0/0x4ffc00000, data 0xff791/0x1af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
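
The heartbeat lines embed BlueStore's store_statfs in hex. Reading the first triple as available/internally-reserved/total bytes is an assumption about the field order, but it is consistent with this cluster: 0x4ffc00000 is ~20 GiB per OSD, and three such OSDs match the pgmap's 60 GiB total. Under that assumption, capacity can be recovered like this:

    import re

    STATFS = re.compile(
        r"store_statfs\(0x(?P<avail>[0-9a-f]+)/0x(?P<resv>[0-9a-f]+)/0x(?P<total>[0-9a-f]+)"
    )

    def osd_capacity(line):
        """Return (available GiB, total GiB) from a heartbeat store_statfs line,
        assuming the available/reserved/total field order described above."""
        m = STATFS.search(line)
        gib = lambda h: int(h, 16) / 2**30
        return gib(m.group("avail")), gib(m.group("total"))

    # The line above -> (~19.95, ~20.0): one 20 GiB OSD, nearly empty.
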
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 1867776 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 1859584 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 1859584 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 145 handle_osd_map epochs [147,148], i have 145, src has [1,148]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 145 handle_osd_map epochs [146,148], i have 145, src has [1,148]
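
The handle_osd_map lines trace osd.0 absorbing a burst of new osdmaps, advancing from epoch 138 to 148 within one second; the two lines directly above even land slightly out of order ([147,148] before [146,148]), which journal interleaving can do. A sketch to watch the catch-up:

    import re

    OSDMAP = re.compile(
        r"handle_osd_map epochs \[(?P<first>\d+),(?P<last>\d+)\], "
        r"i have (?P<have>\d+), src has \[(?P<src_lo>\d+),(?P<src_hi>\d+)\]"
    )

    def map_lag(lines):
        """Yield (epoch-I-have, newest-epoch-known) to watch the OSD catch up."""
        for line in lines:
            m = OSDMAP.search(line)
            if m:
                yield int(m.group("have")), int(m.group("src_hi"))

    # Across the lines above this walks (138, 139) ... (145, 148): a
    # ten-epoch burst absorbed within one second.
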
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920250 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 548864 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 548864 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 548864 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920250 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 540672 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920250 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 516096 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 507904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 507904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 499712 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 499712 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920250 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 483328 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 483328 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920250 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 483328 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 475136 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 475136 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e73bc400 session 0x55c4e619cf00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e5588000 session 0x55c4e6229860
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 466944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 466944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920250 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 466944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 450560 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 450560 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 450560 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7fa2f00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 458752 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920250 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 450560 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 442368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 442368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca63000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 442368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.405124664s of 41.443374634s, submitted: 29
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918534 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 417792 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 417792 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918682 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 401408 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 393216 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 393216 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.506690979s of 10.310895920s, submitted: 10
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 385024 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918514 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 376832 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 368640 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 360448 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [0,0,0,0,0,0,0,0,1,0,0,1])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 352256 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 335872 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918550 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 327680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 327680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 319488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 311296 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 278528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.192948818s of 10.859099388s, submitted: 8
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 262144 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 253952 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 253952 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 245760 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 245760 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 245760 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78372864 unmapped: 229376 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78372864 unmapped: 229376 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78372864 unmapped: 229376 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 221184 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 221184 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 212992 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 212992 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 204800 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 204800 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 204800 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 196608 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 196608 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 180224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 180224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 163840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 163840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 163840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 155648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 155648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 147456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 147456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 139264 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 139264 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 139264 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 131072 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 122880 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 122880 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 114688 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 114688 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 114688 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 106496 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 106496 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 98304 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 98304 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 90112 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 81920 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 73728 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 73728 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 57344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 40960 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 32768 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 32768 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 32768 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 8192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 8192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 0 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1024000 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e73be000 session 0x55c4e62585a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8d400 session 0x55c4e62570e0
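The ms_handle_reset entries above mark the messenger notifying osd.0 that an established connection was reset by its peer; the con and session values are in-process heap addresses, useful only for correlating entries within this one daemon's log. Scattered resets during otherwise normal operation, as here, are typically routine (clients and monitors drop idle connections); a burst of them alongside missed heartbeats would point at network trouble instead.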
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 999424 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 999424 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 999424 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 81.228744507s of 81.289070129s, submitted: 1
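The _kv_sync_thread utilization lines quantify how busy the RocksDB sync thread was over its reporting window; here it was idle 81.23 s of 81.29 s, i.e. roughly 99.9% idle with a single submitted batch, matching the near-zero cache usage above. Turning these lines into a busy fraction (busy_fraction is a hypothetical helper):

    import re

    UTIL_RE = re.compile(r"idle (?P<idle>[\d.]+)s of (?P<span>[\d.]+)s, submitted: (?P<n>\d+)")

    def busy_fraction(line):
        m = UTIL_RE.search(line)
        idle, span = float(m["idle"]), float(m["span"])
        return 1.0 - idle / span, int(m["n"])

    line = "_kv_sync_thread utilization: idle 81.228744507s of 81.289070129s, submitted: 1"
    frac, batches = busy_fraction(line)
    print(f"{frac:.4%} busy across {batches} batch(es)")  # ~0.07% busy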
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 925696 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918550 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78733312 unmapped: 917504 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 909312 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 909312 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 892928 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 892928 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918534 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 868352 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918534 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.499966621s of 15.042757988s, submitted: 9
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 802816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 794624 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 794624 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e531d2c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 778240 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 778240 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 770048 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 770048 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 770048 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 761856 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 761856 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 761856 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918402 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 753664 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554f000 session 0x55c4e78ff4a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 753664 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.034523010s of 16.038230896s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 745472 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 745472 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 737280 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920062 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 737280 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 737280 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 729088 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 712704 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920062 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 1728512 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 1728512 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.635488510s of 12.758879662s, submitted: 9
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920194 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 1720320 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 1671168 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 1671168 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 1630208 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 1630208 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921574 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1622016 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 1613824 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 1613824 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920967 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 1597440 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 1597440 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 1597440 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.523270607s of 14.556501389s, submitted: 11
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1564672 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920835 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1556480 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1556480 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1548288 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1548288 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1548288 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920835 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26392 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7c9cf00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 1523712 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920835 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 1523712 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920835 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1499136 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1499136 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 1490944 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.976387024s of 19.980865479s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 1490944 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920967 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 1490944 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1482752 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1466368 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1466368 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1458176 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924007 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6888 writes, 29K keys, 6888 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6888 writes, 1196 syncs, 5.76 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6888 writes, 29K keys, 6888 commit groups, 1.0 writes per commit group, ingest: 20.43 MB, 0.03 MB/s
Interval WAL: 6888 writes, 1196 syncs, 5.76 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
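In the raw journal this whole dump travels as a single syslog record with its embedded newlines escaped as #012 (octal for \n), and the record is cut off mid-field at the end. Its header is internally consistent, which makes a cheap sanity check when skimming such dumps:

    # Cross-checking the DB Stats header: derived figures match what RocksDB logged.
    writes, syncs = 6888, 1196
    print(round(writes / syncs, 2))            # 5.76 writes per WAL sync, as logged
    interval_mb, interval_s = 20.43, 600.0
    print(round(interval_mb / interval_s, 2))  # 0.03 MB/s interval ingest, as logged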
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1474560 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1474560 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924007 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1466368 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.216524124s of 12.298725128s, submitted: 12
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1466368 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1458176 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1458176 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1458176 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554e800 session 0x55c4e531cb40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923400 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1449984 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1449984 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1441792 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1441792 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1441792 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923268 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1433600 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1433600 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1433600 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1425408 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 1417216 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923268 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 1417216 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.449990273s of 14.504862785s, submitted: 2
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1409024 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1409024 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1400832 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1400832 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923416 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1400832 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e531d0e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1392640 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1392640 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1384448 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923416 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,2])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.758780479s of 11.850454330s, submitted: 8
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e5588000 session 0x55c4e84543c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1359872 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923116 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1359872 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1359872 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 1343488 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922825 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 1327104 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 1327104 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.393840790s of 12.173091888s, submitted: 6
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922957 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1310720 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922957 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1253376 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1253376 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924321 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1253376 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.504771233s of 11.625728607s, submitted: 14
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924189 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 1228800 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 1228800 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 1228800 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 1220608 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924189 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 1220608 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 1212416 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 1212416 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 1212416 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 1204224 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924189 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 1204224 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 1196032 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 1196032 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1187840 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 1179648 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924189 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 1179648 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 1171456 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 1171456 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 1171456 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 1163264 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924189 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 1163264 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 1155072 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8d400 session 0x55c4e79b30e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 1155072 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 1146880 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 1146880 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924189 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 1146880 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 1138688 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 1138688 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 1138688 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 1130496 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924189 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 1130496 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 1122304 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 1122304 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 36.431320190s of 36.434822083s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 1122304 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 1114112 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924321 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 1114112 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 1105920 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 1105920 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 1105920 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 1097728 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925849 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 1089536 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 1089536 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 1081344 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.004766464s of 10.112820625s, submitted: 11
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 1081344 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 1081344 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925090 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 1073152 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 1073152 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 1064960 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 1064960 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 1056768 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 1056768 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 1056768 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 1048576 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 1048576 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 1048576 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1040384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1040384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 1032192 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 1032192 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 1024000 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 1024000 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 1024000 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 56.604534149s of 56.608047485s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 958464 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554f000 session 0x55c4e7f2a780
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [0,1,1])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925182 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 1769472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925110 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 1736704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.513541222s of 11.403853416s, submitted: 218
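This is the busiest sync-thread sample in the section: 218 commits in an 11.40 s window, yet the thread is still about 92% idle, so the OSD remains effectively unloaded throughout. The arithmetic (Python):

    print(1 - 10.513541222 / 11.403853416)   # ≈ 0.078 busy
    print(218 / 11.403853416)                # ≈ 19.1 commits per second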
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 1728512 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926770 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926011 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926163 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.836187363s of 16.871778488s, submitted: 11
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e8370b40
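The occasional ms_handle_reset entries record the OSD's messenger noticing a peer-side connection reset and dropping the associated session object; the con/session values are heap addresses, useful only for correlating entries within this process. On a quiet cluster these are routine, typically a client or monitor closing an idle connection, not an error.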
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926031 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926031 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926031 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554e800 session 0x55c4e7f2a1e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.289003372s of 16.371223450s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926163 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 1712128 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927691 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.128295898s of 10.147748947s, submitted: 5
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927823 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 1703936 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 1695744 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930715 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 1687552 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 1654784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 1654784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930699 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 1654784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.035066605s of 12.087267876s, submitted: 15
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 1654784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 1654784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 1654784 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e73be000 session 0x55c4e83841e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929976 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e73ba000 session 0x55c4e7c512c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 99.480339050s of 99.554061890s, submitted: 2
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930108 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 1630208 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931636 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 1646592 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931768 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.082564354s of 12.137281418s, submitted: 9
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931045 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 1638400 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 1622016 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 1622016 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930286 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 1622016 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 1605632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 1605632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.163699150s of 11.219295502s, submitted: 11
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7fa2000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 53.619079590s of 53.630729675s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930454 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931966 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931966 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.790068626s of 13.823327065s, submitted: 10
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931666 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e8370b40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.882770538s of 37.894630432s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931950 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933478 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.014238358s of 12.047296524s, submitted: 10
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932719 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932739 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932739 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932739 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554e800 session 0x55c4e7f2ab40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8d400 session 0x55c4e619cf00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932739 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932739 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.964462280s of 27.972938538s, submitted: 2
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933003 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934515 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.966125488s of 11.298370361s, submitted: 13
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935436 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935172 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935172 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554f000 session 0x55c4e6726780
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935172 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7c9f680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935172 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935172 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.646772385s of 26.672891617s, submitted: 4
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935284 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1409024 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934861 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.061220169s of 12.097341537s, submitted: 11
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934861 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8d400 session 0x55c4e73e85a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.280908585s of 28.311281204s, submitted: 8
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 1392640 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934729 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 1392640 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 1392640 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934729 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934729 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.471632004s of 16.617654800s, submitted: 9
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e73ba000 session 0x55c4e71983c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread fragmentation_score=0.000029 took=0.000052s
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 7653 writes, 30K keys, 7653 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7653 writes, 1575 syncs, 4.86 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 765 writes, 1338 keys, 765 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s
Interval WAL: 765 writes, 379 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1335296 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.645007133s of 15.648278236s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934713 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936241 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936241 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 212992 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.669661522s of 12.726916313s, submitted: 10
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935941 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 180224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e558b000 session 0x55c4e71981e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936093 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936093 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.996849060s of 18.999956131s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936225 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 147456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 2 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936241 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936994 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 2 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.219687462s of 13.265237808s, submitted: 12
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e54bd680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937014 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 2 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937014 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 3 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937014 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 3 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937014 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.302152634s of 18.305004120s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 2 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937162 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 212992 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 188416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 172032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938542 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 172032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.008382797s of 10.192886353s, submitted: 11
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 2 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554e800 session 0x55c4e73efc20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937951 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 4 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937951 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 3 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.604965210s of 13.608486176s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938067 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 147456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e558b000 session 0x55c4e6229e00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938083 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 3 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938083 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 3 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.548464775s of 14.559932709s, submitted: 4
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938083 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 3 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938083 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941091 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 73728 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e794ef00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 73728 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 73728 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.014322281s of 15.062206268s, submitted: 15
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940959 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 57344 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 57344 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 2 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940959 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
[previous message repeated 4 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941091 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.579513550s of 12.591442108s, submitted: 3
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 16384 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 16384 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 16384 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941107 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 8192 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 8192 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 0 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940348 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
[previous message repeated 2 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939909 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.942881584s of 13.975779533s, submitted: 10
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8d400 session 0x55c4e7466000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939777 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939777 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.855612755s of 10.858253479s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 983040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [1])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 770048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 770048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939925 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 761856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 761856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 761856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 761856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939925 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e73ba000 session 0x55c4e8336d20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939925 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
[previous message repeated 2 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.921133041s of 17.504179001s, submitted: 234
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939625 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 737280 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 737280 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939909 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 737280 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 737280 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
[previous message repeated 2 times]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942949 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.817281723s of 12.883481026s, submitted: 8
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 720896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 720896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 720896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942949 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942649 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942801 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942801 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e83854a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942801 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942801 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.492683411s of 31.511125565s, submitted: 6
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942933 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942949 data_alloc: 218103808 data_used: 98304
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942342 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.010691643s of 12.045362473s, submitted: 10
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554f000 session 0x55c4e82e70e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.260337830s of 119.267936707s, submitted: 2
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945385 data_alloc: 218103808 data_used: 102400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 720896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 151 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e79b2b40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 18440192 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 18309120 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb568000/0x0/0x4ffc00000, data 0x11eba2f/0x12a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 152 ms_handle_reset con 0x55c4e5d9d400 session 0x55c4e531c5a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fb564000/0x0/0x4ffc00000, data 0x11edb37/0x12a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073756 data_alloc: 218103808 data_used: 110592
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fb564000/0x0/0x4ffc00000, data 0x11edb37/0x12a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e558b000 session 0x55c4e7ca0d20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076318 data_alloc: 218103808 data_used: 110592
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb561000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 18284544 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 18284544 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.473200798s of 12.728899956s, submitted: 91
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075494 data_alloc: 218103808 data_used: 106496
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075626 data_alloc: 218103808 data_used: 106496
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 18268160 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 18268160 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 18268160 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.053786278s of 11.139539719s, submitted: 6
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 18251776 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 18251776 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075494 data_alloc: 218103808 data_used: 106496
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 18251776 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 18235392 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 18235392 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 18235392 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 18227200 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076990 data_alloc: 218103808 data_used: 110592
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 18227200 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 18227200 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 18219008 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 18219008 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 18219008 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076858 data_alloc: 218103808 data_used: 110592
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 18219008 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 18219008 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7199a40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e73ba000 session 0x55c4e7475a40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e6f07800 session 0x55c4e531cf00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 18210816 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.845787048s of 14.884376526s, submitted: 11
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e73ba800 session 0x55c4e54bd680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e558b000 session 0x55c4e794f2c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97067008 unmapped: 6709248 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97067008 unmapped: 6709248 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e89d05a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107106 data_alloc: 234881024 data_used: 11579392
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97067008 unmapped: 6709248 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97083392 unmapped: 6692864 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 155 ms_handle_reset con 0x55c4e6f07800 session 0x55c4e89d0960
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 155 ms_handle_reset con 0x55c4e73ba000 session 0x55c4e89d0d20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 155 ms_handle_reset con 0x55c4e73bf800 session 0x55c4e89d0f00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 155 ms_handle_reset con 0x55c4e558b000 session 0x55c4e89d12c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 155 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e89d1680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 6594560 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb553000/0x0/0x4ffc00000, data 0x11f9d45/0x12b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 6594560 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 6561792 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118732 data_alloc: 234881024 data_used: 11579392
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 6561792 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb553000/0x0/0x4ffc00000, data 0x11f9d45/0x12b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 6561792 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb553000/0x0/0x4ffc00000, data 0x11f9d45/0x12b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 6561792 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 6545408 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.737378120s of 10.802026749s, submitted: 19
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97255424 unmapped: 6520832 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120694 data_alloc: 234881024 data_used: 11603968
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb551000/0x0/0x4ffc00000, data 0x11fbd17/0x12ba000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb551000/0x0/0x4ffc00000, data 0x11fbd17/0x12ba000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120694 data_alloc: 234881024 data_used: 11603968
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb551000/0x0/0x4ffc00000, data 0x11fbd17/0x12ba000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb551000/0x0/0x4ffc00000, data 0x11fbd17/0x12ba000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.648746490s of 11.656969070s, submitted: 15
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120862 data_alloc: 234881024 data_used: 11599872
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 1916928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 2146304 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2301952 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d98000/0x0/0x4ffc00000, data 0x180dd17/0x18cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2301952 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2301952 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168960 data_alloc: 234881024 data_used: 11628544
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2301952 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d98000/0x0/0x4ffc00000, data 0x180dd17/0x18cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2301952 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9d000/0x0/0x4ffc00000, data 0x1810d17/0x18cf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9d000/0x0/0x4ffc00000, data 0x1810d17/0x18cf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165024 data_alloc: 234881024 data_used: 11628544
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9d000/0x0/0x4ffc00000, data 0x1810d17/0x18cf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.746442795s of 12.949378967s, submitted: 59
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9c000/0x0/0x4ffc00000, data 0x1811d17/0x18d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9c000/0x0/0x4ffc00000, data 0x1811d17/0x18d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165248 data_alloc: 234881024 data_used: 11628544
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9c000/0x0/0x4ffc00000, data 0x1811d17/0x18d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c25800 session 0x55c4e7b803c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c80c00 session 0x55c4e7c9d0e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8dc00 session 0x55c4e619c000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e74661e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7f2ab40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e6246000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c25800 session 0x55c4e61f2b40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 1220608 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c80c00 session 0x55c4e619cf00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e83843c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e531c780
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e75ee1e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c25800 session 0x55c4e8444f00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183551 data_alloc: 234881024 data_used: 12087296
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9c22000/0x0/0x4ffc00000, data 0x198bd17/0x1a4a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c15400 session 0x55c4e7f2a5a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7628b40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183551 data_alloc: 234881024 data_used: 12087296
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e7c9cf00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.767366409s of 12.866190910s, submitted: 26
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7951860
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfd000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104333312 unmapped: 3637248 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103784448 unmapped: 4186112 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 3981312 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 3923968 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfd000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188841 data_alloc: 234881024 data_used: 12091392
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 3923968 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfd000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 3923968 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 3923968 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfd000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [1,0,0,1])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 3923968 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfc000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 3915776 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190673 data_alloc: 234881024 data_used: 12087296
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfc000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 3915776 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 3891200 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 3891200 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfd000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 3891200 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.556396484s of 12.604061127s, submitted: 14
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 3891200 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204693 data_alloc: 234881024 data_used: 12136448
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105209856 unmapped: 3809280 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9beb000/0x0/0x4ffc00000, data 0x19c1d27/0x1a81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 3637248 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9a86000/0x0/0x4ffc00000, data 0x1b1dd27/0x1bdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3620864 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3620864 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9a12000/0x0/0x4ffc00000, data 0x1b99d27/0x1c59000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105414656 unmapped: 3604480 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213064 data_alloc: 234881024 data_used: 12189696
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105414656 unmapped: 3604480 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105414656 unmapped: 3604480 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 3375104 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 3375104 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x1bbad27/0x1c7a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 3375104 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212024 data_alloc: 234881024 data_used: 12189696
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 3375104 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.053034782s of 12.220693588s, submitted: 45
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c25800 session 0x55c4e8454f00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c2d000 session 0x55c4e6246d20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e73f0000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x1bbad27/0x1c7a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9b000/0x0/0x4ffc00000, data 0x1812d17/0x18d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172459 data_alloc: 234881024 data_used: 11890688
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9b000/0x0/0x4ffc00000, data 0x1812d17/0x18d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9b000/0x0/0x4ffc00000, data 0x1812d17/0x18d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07800 session 0x55c4e89d1a40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e7947680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127305 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127305 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127305 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127305 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127305 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.848108292s of 30.943441391s, submitted: 31
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e89ca5a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa033000/0x0/0x4ffc00000, data 0x157bd07/0x1639000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554f000 session 0x55c4e84452c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554f000 session 0x55c4e7472780
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155793 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e82e7a40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e79b23c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7c9c000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa033000/0x0/0x4ffc00000, data 0x157bd07/0x1639000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa032000/0x0/0x4ffc00000, data 0x157bd17/0x163a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182383 data_alloc: 234881024 data_used: 15511552
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa032000/0x0/0x4ffc00000, data 0x157bd17/0x163a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.281540871s of 12.335005760s, submitted: 7
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 2777088 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182011 data_alloc: 234881024 data_used: 15511552
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 2777088 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 2777088 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108355584 unmapped: 2768896 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108355584 unmapped: 2768896 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f976d000/0x0/0x4ffc00000, data 0x1e40d17/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 5152768 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248907 data_alloc: 234881024 data_used: 15556608
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9733000/0x0/0x4ffc00000, data 0x1e74d17/0x1f33000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 3776512 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 3776512 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 3776512 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f972d000/0x0/0x4ffc00000, data 0x1e80d17/0x1f3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243260 data_alloc: 234881024 data_used: 15556608
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f972d000/0x0/0x4ffc00000, data 0x1e80d17/0x1f3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.510517120s of 15.721278191s, submitted: 62
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243280 data_alloc: 234881024 data_used: 15560704
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f972d000/0x0/0x4ffc00000, data 0x1e80d17/0x1f3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f972d000/0x0/0x4ffc00000, data 0x1e80d17/0x1f3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243280 data_alloc: 234881024 data_used: 15560704
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554e800 session 0x55c4e71992c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f972d000/0x0/0x4ffc00000, data 0x1e80d17/0x1f3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243280 data_alloc: 234881024 data_used: 15560704
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c25800 session 0x55c4e7ba1e00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07800 session 0x55c4e7c510e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.520026207s of 12.524559021s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e54bcf00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132603 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132735 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.115133286s of 10.144872665s, submitted: 10
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 6660096 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 6660096 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 6660096 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134263 data_alloc: 234881024 data_used: 11837440
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134263 data_alloc: 234881024 data_used: 11837440
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.573747635s of 11.605253220s, submitted: 9
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133963 data_alloc: 234881024 data_used: 11837440
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 9994240 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e7946b40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 13139968 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 13139968 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172001 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x166fd07/0x172d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 13139968 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 13139968 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 13139968 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e83852c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x166fd07/0x172d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 13369344 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 13369344 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196357 data_alloc: 234881024 data_used: 15122432
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 12320768 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f1b000/0x0/0x4ffc00000, data 0x1693d07/0x1751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f1b000/0x0/0x4ffc00000, data 0x1693d07/0x1751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196357 data_alloc: 234881024 data_used: 15122432
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f1b000/0x0/0x4ffc00000, data 0x1693d07/0x1751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f1b000/0x0/0x4ffc00000, data 0x1693d07/0x1751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.439218521s of 21.482173920s, submitted: 13
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237173 data_alloc: 234881024 data_used: 15175680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 10264576 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9937000/0x0/0x4ffc00000, data 0x1c6ed07/0x1d2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x1c6fd07/0x1d2d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07000 session 0x55c4e7c512c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252145 data_alloc: 234881024 data_used: 15360000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x1c6fd07/0x1d2d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247489 data_alloc: 234881024 data_used: 15360000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f991e000/0x0/0x4ffc00000, data 0x1c90d07/0x1d4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f991e000/0x0/0x4ffc00000, data 0x1c90d07/0x1d4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.988185883s of 13.187009811s, submitted: 63
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 10649600 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 10649600 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247593 data_alloc: 234881024 data_used: 15360000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 10649600 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 10649600 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9915000/0x0/0x4ffc00000, data 0x1c99d07/0x1d57000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 10649600 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e84445a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110092288 unmapped: 11190272 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110092288 unmapped: 11190272 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272861 data_alloc: 234881024 data_used: 15360000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110092288 unmapped: 11190272 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f966b000/0x0/0x4ffc00000, data 0x1f43d07/0x2001000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 11182080 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 11182080 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 9158 writes, 34K keys, 9158 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 9158 writes, 2255 syncs, 4.06 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1505 writes, 4088 keys, 1505 commit groups, 1.0 writes per commit group, ingest: 3.55 MB, 0.01 MB/s
Interval WAL: 1505 writes, 680 syncs, 2.21 writes per sync, written: 0.00 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 11182080 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.118002892s of 11.166302681s, submitted: 14
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c12800 session 0x55c4e5c7cb40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 109666304 unmapped: 11616256 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f966b000/0x0/0x4ffc00000, data 0x1f43d07/0x2001000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274889 data_alloc: 234881024 data_used: 15360000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 109666304 unmapped: 11616256 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 109551616 unmapped: 11730944 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110747648 unmapped: 10534912 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9647000/0x0/0x4ffc00000, data 0x1f67d07/0x2025000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110747648 unmapped: 10534912 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110755840 unmapped: 10526720 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292521 data_alloc: 234881024 data_used: 17858560
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 10379264 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9647000/0x0/0x4ffc00000, data 0x1f67d07/0x2025000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 10362880 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 10362880 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 10371072 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 10371072 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292725 data_alloc: 234881024 data_used: 17858560
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 10371072 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 10371072 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.492300034s of 12.506669044s, submitted: 4
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9647000/0x0/0x4ffc00000, data 0x1f67d07/0x2025000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 4939776 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9144000/0x0/0x4ffc00000, data 0x246ad07/0x2528000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 4243456 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9107000/0x0/0x4ffc00000, data 0x24a7d07/0x2565000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 4210688 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342291 data_alloc: 234881024 data_used: 18628608
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 4079616 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 4079616 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 4079616 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9107000/0x0/0x4ffc00000, data 0x24a7d07/0x2565000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 4046848 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115367936 unmapped: 5914624 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340323 data_alloc: 234881024 data_used: 18628608
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 5775360 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c2c400 session 0x55c4e7b9eb40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c28000 session 0x55c4e67243c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 5775360 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.908575058s of 10.101375580s, submitted: 82
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c28000 session 0x55c4e73e85a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 7979008 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 7979008 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9912000/0x0/0x4ffc00000, data 0x1c9cd07/0x1d5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 7979008 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9912000/0x0/0x4ffc00000, data 0x1c9cd07/0x1d5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253951 data_alloc: 234881024 data_used: 15360000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 7979008 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7944960
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e73ee5a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 7979008 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e7941860
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147625 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147625 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147625 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147625 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147625 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07000 session 0x55c4e7b80960
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07000 session 0x55c4e7ca3c20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7d88780
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e6724780
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.788125992s of 30.815643311s, submitted: 11
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 10158080 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e6256780
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c28000 session 0x55c4e82e6000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7d89c20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7624f00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e73ef680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 13991936 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9ff6000/0x0/0x4ffc00000, data 0x15b6d79/0x1676000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 13991936 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188942 data_alloc: 234881024 data_used: 11841536
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 13991936 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07000 session 0x55c4e83361e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 13991936 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c12800 session 0x55c4e794e000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 13991936 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c12800 session 0x55c4e73f0d20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7629c20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111525888 unmapped: 14041088 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 13975552 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15b6dac/0x1678000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197713 data_alloc: 234881024 data_used: 12099584
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 13975552 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15b6dac/0x1678000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208505 data_alloc: 234881024 data_used: 13713408
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.291709900s of 17.377859116s, submitted: 43
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282921 data_alloc: 234881024 data_used: 14348288
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1eaddac/0x1f6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115269632 unmapped: 10297344 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 8699904 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 8699904 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 8699904 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f965b000/0x0/0x4ffc00000, data 0x1f40dac/0x2002000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f965b000/0x0/0x4ffc00000, data 0x1f40dac/0x2002000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 8691712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302963 data_alloc: 234881024 data_used: 14471168
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 8691712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 8691712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 9060352 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 9060352 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9649000/0x0/0x4ffc00000, data 0x1f61dac/0x2023000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 9060352 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294123 data_alloc: 234881024 data_used: 14483456
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 9052160 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 9052160 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.663401604s of 11.935150146s, submitted: 126
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 9052160 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9642000/0x0/0x4ffc00000, data 0x1f68dac/0x202a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 9052160 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 9625600 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294379 data_alloc: 234881024 data_used: 14483456
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 9625600 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9642000/0x0/0x4ffc00000, data 0x1f68dac/0x202a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 9625600 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 9617408 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 9617408 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 9617408 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f963f000/0x0/0x4ffc00000, data 0x1f6bdac/0x202d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f963f000/0x0/0x4ffc00000, data 0x1f6bdac/0x202d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295531 data_alloc: 234881024 data_used: 14512128
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 9617408 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 9617408 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 9609216 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.539818764s of 11.554802895s, submitted: 4
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9631000/0x0/0x4ffc00000, data 0x1f79dac/0x203b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9631000/0x0/0x4ffc00000, data 0x1f79dac/0x203b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296099 data_alloc: 234881024 data_used: 14512128
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118177792 unmapped: 7389184 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c2bc00 session 0x55c4e7b9f2c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c21400 session 0x55c4e62474a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c17c00 session 0x55c4e7940780
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7b9f4a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c12800 session 0x55c4e74754a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9631000/0x0/0x4ffc00000, data 0x1f79dac/0x203b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323210 data_alloc: 234881024 data_used: 14512128
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f92b1000/0x0/0x4ffc00000, data 0x22f9dac/0x23bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f92b1000/0x0/0x4ffc00000, data 0x22f9dac/0x23bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324050 data_alloc: 234881024 data_used: 14512128
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.159543991s of 13.274734497s, submitted: 21
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 9478144 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 9437184 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 7208960 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 7208960 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348510 data_alloc: 234881024 data_used: 18182144
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 7159808 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f92ae000/0x0/0x4ffc00000, data 0x22fcdac/0x23be000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 7159808 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 7159808 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 7159808 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f92ae000/0x0/0x4ffc00000, data 0x22fcdac/0x23be000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 7159808 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349270 data_alloc: 234881024 data_used: 18247680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118415360 unmapped: 7151616 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7102464 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7102464 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.494841576s of 11.516628265s, submitted: 5
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120193024 unmapped: 5373952 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8f7a000/0x0/0x4ffc00000, data 0x2630dac/0x26f2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120193024 unmapped: 5373952 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380476 data_alloc: 234881024 data_used: 18259968
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 5316608 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 5316608 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 5267456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 5267456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8f6b000/0x0/0x4ffc00000, data 0x263fdac/0x2701000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 5251072 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379860 data_alloc: 234881024 data_used: 18259968
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 5251072 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8f6b000/0x0/0x4ffc00000, data 0x263fdac/0x2701000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 5218304 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 6709248 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 6709248 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 6709248 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379964 data_alloc: 234881024 data_used: 18259968
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 6709248 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 6709248 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8f66000/0x0/0x4ffc00000, data 0x2644dac/0x2706000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.125692368s of 14.222403526s, submitted: 26
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c21400 session 0x55c4e7940000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118210560 unmapped: 7356416 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c2bc00 session 0x55c4e84443c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554f000 session 0x55c4e7c9f4a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307424 data_alloc: 234881024 data_used: 14561280
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1f8ddac/0x204f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1f8ddac/0x204f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1f8ddac/0x204f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307424 data_alloc: 234881024 data_used: 14561280
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e73e8d20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e73ec1e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e75ee960
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b7000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.017325401s of 12.180756569s, submitted: 72
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166819 data_alloc: 234881024 data_used: 10006528
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554f800 session 0x55c4e4fa34a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554ec00 session 0x55c4e4fa3680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e608c000 session 0x55c4e5c7c3c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166835 data_alloc: 234881024 data_used: 10002432
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 10059776 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e6247a40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 10059776 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 10059776 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 10059776 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 10059776 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.982777596s of 10.994009018s, submitted: 4
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166667 data_alloc: 234881024 data_used: 10002432
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 9961472 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [1])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166687 data_alloc: 234881024 data_used: 10006528
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e7198f00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c21000 session 0x55c4e73e8b40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e74741e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.336001396s of 10.008099556s, submitted: 246
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e73e9e00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e73ef0e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187036 data_alloc: 234881024 data_used: 10006528
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 10878976 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1d0000/0x0/0x4ffc00000, data 0x13ded07/0x149c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1d0000/0x0/0x4ffc00000, data 0x13ded07/0x149c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 10878976 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 10878976 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1d0000/0x0/0x4ffc00000, data 0x13ded07/0x149c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e67272c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 10878976 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07400 session 0x55c4e61f25a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e75ee000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e75efa40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 10731520 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1d0000/0x0/0x4ffc00000, data 0x13ded07/0x149c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191498 data_alloc: 234881024 data_used: 10010624
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 10723328 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x1402d17/0x14c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x1402d17/0x14c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x1402d17/0x14c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197426 data_alloc: 234881024 data_used: 10801152
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.307585716s of 11.336967468s, submitted: 8
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e73e8f00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e73ee5a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e73bb400 session 0x55c4e73f0b40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f84000/0x0/0x4ffc00000, data 0x1219d17/0x12d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f84000/0x0/0x4ffc00000, data 0x1219d17/0x12d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171331 data_alloc: 234881024 data_used: 10006528
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171331 data_alloc: 234881024 data_used: 10006528
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171331 data_alloc: 234881024 data_used: 10006528
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7629680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e76294a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e761e5a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 11771904 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e7bd4000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.225736618s of 16.294603348s, submitted: 21
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671fc00 session 0x55c4e7bd4d20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e74663c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e61f30e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e73f12c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e89ca960
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f993c000/0x0/0x4ffc00000, data 0x1861d17/0x1920000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224175 data_alloc: 234881024 data_used: 10006528
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f993c000/0x0/0x4ffc00000, data 0x1861d17/0x1920000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e4f800 session 0x55c4e7625680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671c400 session 0x55c4e83374a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 14319616 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226187 data_alloc: 234881024 data_used: 10010624
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 14295040 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9918000/0x0/0x4ffc00000, data 0x1885d17/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 11812864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 11812864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 11804672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7b7de00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e7ca8f00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 11804672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.655078888s of 12.087609291s, submitted: 13
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7940780
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 14639104 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 14639104 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 14639104 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 14639104 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 14639104 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178121 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178121 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.407890320s of 18.439193726s, submitted: 12
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 15147008 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 15147008 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 15147008 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 15147008 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 15147008 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: mgrc ms_handle_reset ms_handle_reset con 0x55c4e51e6000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2113101694
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2113101694,v1:192.168.122.100:6801/2113101694]
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: mgrc handle_mgr_configure stats_period=5
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 15106048 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 15106048 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 15106048 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e8455680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e7940b40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671c400 session 0x55c4e7d88000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e61f2f00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c81c00 session 0x55c4e76252c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 15122432 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 15122432 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 15122432 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e79423c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e6727c20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671c400 session 0x55c4e61f3860
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e761f680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 15106048 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.496601105s of 37.500679016s, submitted: 1
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 14606336 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 13770752 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 13729792 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f98bb000/0x0/0x4ffc00000, data 0x18e3d07/0x19a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 13860864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 13860864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231833 data_alloc: 234881024 data_used: 9707520
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 13860864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 13860864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f98b3000/0x0/0x4ffc00000, data 0x18ebd07/0x19a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231833 data_alloc: 234881024 data_used: 9707520
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f98b3000/0x0/0x4ffc00000, data 0x18ebd07/0x19a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e608cc00 session 0x55c4e7c972c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231833 data_alloc: 234881024 data_used: 9707520
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.041767120s of 15.136543274s, submitted: 32
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e75ef4a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180925 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180925 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180925 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180925 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.009258270s of 20.026557922s, submitted: 6
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e619cf00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9879000/0x0/0x4ffc00000, data 0x1925d07/0x19e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235495 data_alloc: 234881024 data_used: 9547776
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9879000/0x0/0x4ffc00000, data 0x1925d07/0x19e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 17227776 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 16539648 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283375 data_alloc: 234881024 data_used: 14761984
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 16539648 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 16539648 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9879000/0x0/0x4ffc00000, data 0x1925d07/0x19e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119545856 unmapped: 16523264 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119545856 unmapped: 16523264 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119545856 unmapped: 16523264 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283375 data_alloc: 234881024 data_used: 14761984
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 16883712 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7940d20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c0e800 session 0x55c4e7941680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e73bac00 session 0x55c4e79403c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 16859136 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7941e00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.217409134s of 16.274227142s, submitted: 9
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e7b9f680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7b7de00
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c0e800 session 0x55c4e7473680
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c23400 session 0x55c4e74725a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c23400 session 0x55c4e89ca960
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9879000/0x0/0x4ffc00000, data 0x1925d07/0x19e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 17752064 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 17752064 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f93e2000/0x0/0x4ffc00000, data 0x1dbbd17/0x1e7a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 16687104 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370949 data_alloc: 234881024 data_used: 14852096
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 15843328 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 15843328 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 15712256 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2458d17/0x2517000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 15712256 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 12427264 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402519 data_alloc: 234881024 data_used: 18534400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 123674624 unmapped: 12394496 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8d24000/0x0/0x4ffc00000, data 0x2479d17/0x2538000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398031 data_alloc: 234881024 data_used: 18534400
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.642805099s of 14.886832237s, submitted: 69
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8d21000/0x0/0x4ffc00000, data 0x247cd17/0x253b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411793 data_alloc: 234881024 data_used: 18571264
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 13615104 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122634240 unmapped: 13434880 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 13393920 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 13393920 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8a6c000/0x0/0x4ffc00000, data 0x2731d17/0x27f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 13377536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422455 data_alloc: 234881024 data_used: 18698240
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 13377536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 13377536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 13377536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8a69000/0x0/0x4ffc00000, data 0x2734d17/0x27f3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422455 data_alloc: 234881024 data_used: 18698240
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8a69000/0x0/0x4ffc00000, data 0x2734d17/0x27f3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e89ca5a0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.802055359s of 17.920734406s, submitted: 34
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122773504 unmapped: 13295616 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8a63000/0x0/0x4ffc00000, data 0x273ad17/0x27f9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e73bf000 session 0x55c4e7625c20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341159 data_alloc: 234881024 data_used: 14848000
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120643584 unmapped: 15425536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120643584 unmapped: 15425536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f91ab000/0x0/0x4ffc00000, data 0x1fefd07/0x20ad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120643584 unmapped: 15425536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671c400 session 0x55c4e75eed20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120643584 unmapped: 15425536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e95e9400 session 0x55c4e73e90e0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 20553728 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194248 data_alloc: 218103808 data_used: 7647232
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 20553728 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 20553728 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194248 data_alloc: 218103808 data_used: 7647232
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194248 data_alloc: 218103808 data_used: 7647232
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194248 data_alloc: 218103808 data_used: 7647232
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194248 data_alloc: 218103808 data_used: 7647232
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.277734756s of 26.333208084s, submitted: 21
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e75eeb40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270812 data_alloc: 218103808 data_used: 7647232
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671c400 session 0x55c4e7b9f2c0
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347057 data_alloc: 234881024 data_used: 18173952
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347057 data_alloc: 234881024 data_used: 18173952
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.736631393s of 17.821859360s, submitted: 18
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124968960 unmapped: 18448384 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 18325504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416075 data_alloc: 234881024 data_used: 18780160
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124108800 unmapped: 19308544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x25a6d07/0x2664000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x25a6d07/0x2664000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x25a6d07/0x2664000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x25a6d07/0x2664000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417899 data_alloc: 234881024 data_used: 19128320
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x25a6d07/0x2664000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417899 data_alloc: 234881024 data_used: 19128320
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124133376 unmapped: 19283968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124133376 unmapped: 19283968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c23400 session 0x55c4e7943c20
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.364301682s of 16.526765823s, submitted: 69
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e73bf000 session 0x55c4e7ca8780
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417767 data_alloc: 234881024 data_used: 19128320
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c2a000 session 0x55c4e73e9a40
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26329088 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26329088 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26329088 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26329088 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26329088 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26329088 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26329088 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26329088 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26312704 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26312704 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26312704 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26312704 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: do_command 'config diff' '{prefix=config diff}'
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: do_command 'config show' '{prefix=config show}'
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 26034176 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: do_command 'counter dump' '{prefix=counter dump}'
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: do_command 'counter schema' '{prefix=counter schema}'
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 26664960 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26566656 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:18:56 np0005549474 ceph-osd[83033]: do_command 'log dump' '{prefix=log dump}'
Dec  7 05:18:56 np0005549474 nova_compute[256753]: 2025-12-07 10:18:56.771 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:18:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26135 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.16986 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26407 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  7 05:18:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3213981585' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  7 05:18:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:57.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:18:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17010 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26150 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26416 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1 op/s
Dec  7 05:18:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:18:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:18:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  7 05:18:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2279388824' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  7 05:18:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17022 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26177 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26428 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:18:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec  7 05:18:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4180362073' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  7 05:18:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17049 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26198 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:18:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26446 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:18:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:58.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:18:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17070 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:18:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26216 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:18:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26461 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:18:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17079 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:18:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17097 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:18:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec  7 05:18:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306248426' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  7 05:18:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:58.861Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:18:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:18:58.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:18:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26485 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:18:59 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17103 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:18:59 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26237 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:18:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec  7 05:18:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824909141' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  7 05:18:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec  7 05:18:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:18:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:18:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:59.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:18:59 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26497 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:18:59 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17118 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:18:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec  7 05:18:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/358336022' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  7 05:18:59 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26515 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:18:59 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec  7 05:18:59 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085620205' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  7 05:18:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:59] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:18:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:18:59] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:19:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec  7 05:19:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040182988' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  7 05:19:00 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26536 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:00.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec  7 05:19:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3050173386' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  7 05:19:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec  7 05:19:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996380930' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  7 05:19:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec  7 05:19:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329368812' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  7 05:19:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec  7 05:19:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/797649901' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec  7 05:19:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:19:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec  7 05:19:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093201090' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  7 05:19:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec  7 05:19:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045264146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec  7 05:19:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:01.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:01 np0005549474 nova_compute[256753]: 2025-12-07 10:19:01.773 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:19:01 np0005549474 nova_compute[256753]: 2025-12-07 10:19:01.775 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:19:01 np0005549474 nova_compute[256753]: 2025-12-07 10:19:01.775 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:19:01 np0005549474 nova_compute[256753]: 2025-12-07 10:19:01.775 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:19:01 np0005549474 nova_compute[256753]: 2025-12-07 10:19:01.813 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:01 np0005549474 nova_compute[256753]: 2025-12-07 10:19:01.814 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:19:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec  7 05:19:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/477203890' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec  7 05:19:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  7 05:19:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526849083' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  7 05:19:02 np0005549474 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  7 05:19:02 np0005549474 systemd[1]: Starting Hostname Service...
Dec  7 05:19:02 np0005549474 systemd[1]: Started Hostname Service.
Dec  7 05:19:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:02.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec  7 05:19:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641506262' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec  7 05:19:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec  7 05:19:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/878584831' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec  7 05:19:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec  7 05:19:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650766654' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec  7 05:19:02 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26378 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:02 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17274 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec  7 05:19:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/610642075' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec  7 05:19:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:19:02 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26387 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:03 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17307 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:03 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17298 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:03 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26405 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:03.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:03 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26420 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:03 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17328 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:03 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26659 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:03 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26435 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:03 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17358 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec  7 05:19:03 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1993314288' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec  7 05:19:04 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26677 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:04 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26683 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:04.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:04 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17379 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:04 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17385 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:04 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec  7 05:19:04 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1550008820' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec  7 05:19:04 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26701 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:04 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17406 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:04 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26480 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:04 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26483 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:04 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec  7 05:19:04 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/455888881' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  7 05:19:05 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17430 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:05 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26498 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:05 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26734 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:05 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec  7 05:19:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/25312416' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec  7 05:19:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:05.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:05 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26504 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  7 05:19:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  7 05:19:05 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26516 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  7 05:19:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  7 05:19:05 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26752 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  7 05:19:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  7 05:19:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  7 05:19:05 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  7 05:19:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:19:06 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26770 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:06.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  7 05:19:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  7 05:19:06 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26806 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:06 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Dec  7 05:19:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3317557144' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec  7 05:19:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  7 05:19:06 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  7 05:19:06 np0005549474 nova_compute[256753]: 2025-12-07 10:19:06.813 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:07 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17595 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:07.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:19:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:07 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26624 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:07.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec  7 05:19:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531597163' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec  7 05:19:07 np0005549474 podman[283667]: 2025-12-07 10:19:07.647499634 +0000 UTC m=+0.066480914 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  7 05:19:07 np0005549474 podman[283711]: 2025-12-07 10:19:07.74728115 +0000 UTC m=+0.092358866 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  7 05:19:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:19:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec  7 05:19:07 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197806118' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec  7 05:19:08 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26884 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:08.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec  7 05:19:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476229112' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec  7 05:19:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec  7 05:19:08 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534413216' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec  7 05:19:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:08.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:19:09 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17652 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:09.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:09 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26690 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:09 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec  7 05:19:09 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4052365325' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec  7 05:19:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:09] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:19:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:09] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:19:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 05:19:09 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.0 total, 600.0 interval
Cumulative writes: 6986 writes, 31K keys, 6986 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
Cumulative WAL: 6986 writes, 6986 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1568 writes, 6995 keys, 1568 commit groups, 1.0 writes per commit group, ingest: 11.82 MB, 0.02 MB/s
Interval WAL: 1568 writes, 1568 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     62.8      0.73              0.14        17    0.043       0      0       0.0       0.0
  L6      1/0   12.19 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.6     91.6     79.0      2.65              0.61        16    0.166     88K   8802       0.0       0.0
 Sum      1/0   12.19 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.6     71.9     75.5      3.38              0.75        33    0.102     88K   8802       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.1    102.5    100.9      0.64              0.23         8    0.080     26K   2580       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     91.6     79.0      2.65              0.61        16    0.166     88K   8802       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     63.2      0.72              0.14        16    0.045       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      8.5      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 2400.0 total, 600.0 interval
Flush(GB): cumulative 0.045, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.25 GB write, 0.11 MB/s write, 0.24 GB read, 0.10 MB/s read, 3.4 seconds
Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.6 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5637d9ea7350#2 capacity: 304.00 MB usage: 21.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000162 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1358,20.97 MB,6.89731%) FilterBlock(34,256.36 KB,0.0823523%) IndexBlock(34,446.14 KB,0.143317%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  7 05:19:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec  7 05:19:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1962016154' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec  7 05:19:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:10.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:10 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26917 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:10 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17685 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:10 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec  7 05:19:10 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4287580869' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec  7 05:19:10 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26726 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:19:11 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17697 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec  7 05:19:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/454064808' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec  7 05:19:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:11.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:11 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17712 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:11 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26747 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:11 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26947 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:11 np0005549474 nova_compute[256753]: 2025-12-07 10:19:11.816 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:11 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec  7 05:19:11 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/403766513' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec  7 05:19:12 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26756 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:12.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec  7 05:19:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/803495527' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec  7 05:19:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:19:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:19:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:19:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:19:12 np0005549474 ovs-appctl[284887]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  7 05:19:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:19:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:19:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:19:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:19:12 np0005549474 ovs-appctl[284895]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  7 05:19:12 np0005549474 ovs-appctl[284908]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  7 05:19:12 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26962 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:12 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17745 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26968 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17757 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
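The pg_autoscaler pass above logs, per pool, a capacity ratio, a bias, and a fractional pg target. The logged values are consistent with pg_target = capacity_ratio * bias * 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times the three OSDs behind this 60 GiB cluster — an inference from these numbers, not a quote of the module. A quick check in Python:

    def pg_target(capacity_ratio, bias, target_pgs=300):
        # target_pgs = mon_target_pg_per_osd * num_osds is an assumption
        # inferred from the logged ratios (100 * 3 OSDs here).
        return capacity_ratio * bias * target_pgs

    def quantize(target, minimum=1):
        # Round up to the next power of two, with a floor of `minimum`.
        n = max(int(round(target)), minimum)
        p = 1
        while p < n:
            p *= 2
        return p

    # Pool '.mgr': 7.185749983720779e-06 * 1.0 * 300 ~= 0.0021557 -> quantized to 1
    print(quantize(pg_target(7.185749983720779e-06, 1.0)))
    # Pool 'cephfs.cephfs.meta' (bias 4.0): ~= 0.00061047, matching the log
    print(pg_target(5.087256625643029e-07, 4.0))

Pools whose target rounds to well below the current pg_num still log "quantized to 32 (current 32)" because the autoscaler only adjusts pg_num when the ideal value differs from the current one by a sufficient factor (3x by default).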
Dec  7 05:19:13 np0005549474 podman[285141]: 2025-12-07 10:19:13.2669652 +0000 UTC m=+0.079508856 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26774 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:13.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
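Each radosgw health probe produces a three-line group like the one above: request start, request done, and a beast access line. The access line's field layout (request pointer, client IP, user, timestamp, request line, HTTP status, byte count, latency) is inferred from these samples rather than from a spec; a small hedged parser:

    import re

    # Field layout inferred from the beast lines in this log, not from
    # radosgw documentation; adjust if your build formats differently.
    BEAST_RE = re.compile(
        r'beast: (?P<ptr>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:10:19:13.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group('client'), m.group('status'), m.group('latency'))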
Dec  7 05:19:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Dec  7 05:19:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658715272' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26786 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:13 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:19:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Dec  7 05:19:13 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639348985' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17784 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17787 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:14.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17793 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26995 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:19:14 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26813 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:14 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec  7 05:19:14 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499075912' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  7 05:19:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:15 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26819 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:15.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Dec  7 05:19:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3585056831' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec  7 05:19:15 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27022 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Dec  7 05:19:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/20433843' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec  7 05:19:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:16 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:16 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:16 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:19:16 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27028 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:19:16 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17835 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:16.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:16 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Dec  7 05:19:16 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1693107361' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  7 05:19:16 np0005549474 nova_compute[256753]: 2025-12-07 10:19:16.817 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:17 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26849 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Dec  7 05:19:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979849283' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec  7 05:19:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:17.196Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
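The dispatcher error above means both peer receivers (compute-1 and compute-2 on port 8443) failed to accept the webhook POST within Alertmanager's deadline, across two attempts each. A quick standalone probe of one endpoint, with the URL taken from the log and the 5-second timeout an arbitrary choice:

    import urllib.request

    URL = 'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver'
    try:
        req = urllib.request.Request(
            URL, data=b'{}',
            headers={'Content-Type': 'application/json'},
            method='POST')
        with urllib.request.urlopen(req, timeout=5) as resp:
            print('reachable, HTTP', resp.status)
    except Exception as exc:  # timeout/refused would match the logged failure
        print('unreachable:', exc)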
Dec  7 05:19:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:17.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Dec  7 05:19:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691614735' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec  7 05:19:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:19:17 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27067 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Dec  7 05:19:17 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3876763443' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec  7 05:19:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:18.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:18 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17889 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Dec  7 05:19:18 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846612334' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec  7 05:19:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:18.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:19:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236617164' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec  7 05:19:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:19.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:19 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26888 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.675867) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759675905, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2431, "num_deletes": 251, "total_data_size": 4264172, "memory_usage": 4337440, "flush_reason": "Manual Compaction"}
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec  7 05:19:19 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17913 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759706075, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4147469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29620, "largest_seqno": 32049, "table_properties": {"data_size": 4136048, "index_size": 7083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 28406, "raw_average_key_size": 21, "raw_value_size": 4111614, "raw_average_value_size": 3182, "num_data_blocks": 303, "num_entries": 1292, "num_filter_entries": 1292, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102560, "oldest_key_time": 1765102560, "file_creation_time": 1765102759, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 30558 microseconds, and 8098 cpu microseconds.
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.706423) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4147469 bytes OK
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.706545) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.708462) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.708488) EVENT_LOG_v1 {"time_micros": 1765102759708481, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.708508) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4253288, prev total WAL file size 4253288, number of live WAL files 2.
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.710582) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(4050KB)], [65(12MB)]
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759710666, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16926160, "oldest_snapshot_seqno": -1}
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6680 keys, 14767517 bytes, temperature: kUnknown
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759805329, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14767517, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14723421, "index_size": 26320, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 171517, "raw_average_key_size": 25, "raw_value_size": 14603972, "raw_average_value_size": 2186, "num_data_blocks": 1057, "num_entries": 6680, "num_filter_entries": 6680, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765102759, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.805754) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14767517 bytes
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.808704) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.3 rd, 155.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.2 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 7196, records dropped: 516 output_compression: NoCompression
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.808723) EVENT_LOG_v1 {"time_micros": 1765102759808714, "job": 36, "event": "compaction_finished", "compaction_time_micros": 94927, "compaction_time_cpu_micros": 27867, "output_level": 6, "num_output_files": 1, "total_output_size": 14767517, "num_input_records": 7196, "num_output_records": 6680, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759809442, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759811365, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.710449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.811434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.811439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.811441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.811442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:19:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:19:19.811444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
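The JOB 36 compaction summary above reports read-write-amplify(7.6) and write-amplify(3.6); both follow from the logged byte counts, taking the freshly flushed L0 input file (#67) as the denominator:

    # Reproducing the amplification figures from the compaction log above.
    l0_in = 4147469       # table #67, the L0 input ("in 4.0 MB")
    total_in = 16926160   # "input_data_size" from compaction_started (#67 + #65)
    out = 14767517        # table #68, the compacted L6 output

    write_amplify = out / l0_in                     # ~3.56, logged as 3.6
    read_write_amplify = (total_in + out) / l0_in   # ~7.64, logged as 7.6
    print(round(write_amplify, 2), round(read_write_amplify, 2))

The log prints both values at one decimal place, so 3.56 and 7.64 appear as 3.6 and 7.6.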
Dec  7 05:19:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:19] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:19:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:19] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:19:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Dec  7 05:19:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4059141403' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec  7 05:19:20 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27109 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:20.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:20 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17937 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:20 np0005549474 nova_compute[256753]: 2025-12-07 10:19:20.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:19:20 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26912 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:20 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17952 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:21 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:19:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Dec  7 05:19:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/645209801' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec  7 05:19:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:21.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:21 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27130 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:21 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26924 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Dec  7 05:19:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1341636885' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec  7 05:19:21 np0005549474 nova_compute[256753]: 2025-12-07 10:19:21.818 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:21 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26930 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:21 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17991 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:22.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27148 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.17997 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:19:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Dec  7 05:19:22 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1507606085' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  7 05:19:22 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27154 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:19:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Dec  7 05:19:23 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2361377205' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18027 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:23.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:23 np0005549474 virtqemud[256299]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18033 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.26963 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27178 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:23 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18051 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:24.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27187 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
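[editor's note] Each pg_autoscaler pair above first logs the pool's share of raw capacity ("using X of space"), then the PG target derived from it. The logged targets reproduce exactly with a base of 300 PGs; that base is my inference (plausibly mon_target_pg_per_osd=100 times the 3 OSDs behind this 60 GiB cluster), not something the log states. A sketch of the visible arithmetic, checked against two of the lines above:

    # Reproduces the "pg target" values logged above. base_pgs=300 is an
    # assumption inferred from the ratios; the real pg_autoscaler module in
    # ceph-mgr then quantizes to a power of two, subject to per-pool
    # minimums and a change threshold ("quantized to 32 (current 32)").
    def raw_pg_target(space_ratio, bias, base_pgs=300):
        return space_ratio * bias * base_pgs

    # 'images': using 0.000665858301588852 of space, bias 1.0
    print(raw_pg_target(0.000665858301588852, 1.0))   # ~0.19975749047665559, as logged
    # 'cephfs.cephfs.meta': using 5.087256625643029e-07 of space, bias 4.0
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # ~0.0006104707950771635, as logged
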
Dec  7 05:19:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  7 05:19:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668245142' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  7 05:19:24 np0005549474 systemd[1]: Starting Time & Date Service...
Dec  7 05:19:24 np0005549474 systemd[1]: Started Time & Date Service.
Dec  7 05:19:24 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27005 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:24 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Dec  7 05:19:24 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835152910' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
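[editor's note] The mon audit entries above show exactly what reaches the monitor: a JSON command with a prefix and format, dispatched for client.admin. The same command can be issued programmatically through the librados Python binding; a minimal sketch using the conf path and entity name seen in the log:

    import json
    import rados

    # Same command the audit log shows being dispatched above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json-pretty"}), b'')
    print(out.decode())
    cluster.shutdown()
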
Dec  7 05:19:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:19:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:25.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:19:25 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27211 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:25 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18081 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:25 np0005549474 nova_compute[256753]: 2025-12-07 10:19:25.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:19:25 np0005549474 nova_compute[256753]: 2025-12-07 10:19:25.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:19:25 np0005549474 nova_compute[256753]: 2025-12-07 10:19:25.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:19:25 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27223 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:19:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
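[editor's note] This four-line ganesha cycle repeats roughly every five seconds below: the server re-enters a 90-second grace period, reloads (zero) client records from the RADOS recovery backend, finds nothing to reclaim, and rados_cluster_grace_enforcing returns -45. If that value is a negated Linux errno (ganesha recovery backends usually propagate one, though the log alone cannot confirm it), the platform mapping can be checked directly:

    import os
    # Assuming ret=-45 above is a negated errno from the rados_cluster
    # recovery backend, this prints what 45 means on this host.
    print(os.strerror(45))
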
Dec  7 05:19:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  7 05:19:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1679196041' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  7 05:19:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:26.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.801 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.802 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.802 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.802 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.803 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.830 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.833 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.833 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5014 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.834 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.874 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:26 np0005549474 nova_compute[256753]: 2025-12-07 10:19:26.875 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
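[editor's note] The ovsdbapp cycle above is the OVSDB JSON-RPC keepalive: after ~5 s idle the client sends an inactivity probe (an "echo" request per RFC 7047), drops to IDLE, and returns to ACTIVE once the reply arrives on fd 24. The same probe can be sent by hand against the local ovsdb-server named in the log (tcp:127.0.0.1:6640); a sketch:

    import json
    import socket

    # RFC 7047 echo: a conforming server replies with the params echoed back,
    # e.g. {"id": "probe", "result": [], "error": null}.
    s = socket.create_connection(("127.0.0.1", 6640), timeout=5)
    s.sendall(json.dumps({"method": "echo", "params": [], "id": "probe"}).encode())
    print(s.recv(4096).decode())
    s.close()
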
Dec  7 05:19:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:27.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:19:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:19:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471426469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:19:27 np0005549474 nova_compute[256753]: 2025-12-07 10:19:27.303 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
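[editor's note] The resource audit shells out to `ceph df` and parses its JSON to size the RBD-backed disk pool; the 0.5 s round trip logged above is the subprocess itself, and the matching mon audit entry two lines earlier shows the command arriving as client.openstack. The probe is easy to reproduce; a sketch using the same flags as the logged command (the 'stats'/'total_avail_bytes' keys are the standard `ceph df --format=json` layout):

    import json
    import subprocess

    # The exact command nova logs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("free GiB:", stats["total_avail_bytes"] / 1024 ** 3)
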
Dec  7 05:19:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:27.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:19:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:19:27 np0005549474 nova_compute[256753]: 2025-12-07 10:19:27.471 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:19:27 np0005549474 nova_compute[256753]: 2025-12-07 10:19:27.472 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4253MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:19:27 np0005549474 nova_compute[256753]: 2025-12-07 10:19:27.472 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:19:27 np0005549474 nova_compute[256753]: 2025-12-07 10:19:27.473 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:19:27 np0005549474 nova_compute[256753]: 2025-12-07 10:19:27.555 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:19:27 np0005549474 nova_compute[256753]: 2025-12-07 10:19:27.555 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:19:27 np0005549474 nova_compute[256753]: 2025-12-07 10:19:27.586 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:19:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:19:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:19:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325873371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:19:28 np0005549474 nova_compute[256753]: 2025-12-07 10:19:28.052 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:19:28 np0005549474 nova_compute[256753]: 2025-12-07 10:19:28.061 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:19:28 np0005549474 nova_compute[256753]: 2025-12-07 10:19:28.095 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:19:28 np0005549474 nova_compute[256753]: 2025-12-07 10:19:28.098 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:19:28 np0005549474 nova_compute[256753]: 2025-12-07 10:19:28.098 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
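[editor's note] The inventory dict logged just above fully determines what placement will schedule on this host: for each resource class the usable capacity is (total - reserved) * allocation_ratio. Worked out for the values in this log:

    # Inventory exactly as logged by nova.scheduler.client.report above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~52.2
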
Dec  7 05:19:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:28.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:28.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:19:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:29.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:29] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:19:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:29] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:19:30 np0005549474 nova_compute[256753]: 2025-12-07 10:19:30.094 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:19:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:30.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:19:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:31.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:31 np0005549474 nova_compute[256753]: 2025-12-07 10:19:31.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:19:31 np0005549474 nova_compute[256753]: 2025-12-07 10:19:31.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:19:31 np0005549474 nova_compute[256753]: 2025-12-07 10:19:31.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:19:31 np0005549474 nova_compute[256753]: 2025-12-07 10:19:31.804 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:19:31 np0005549474 nova_compute[256753]: 2025-12-07 10:19:31.805 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:19:31 np0005549474 nova_compute[256753]: 2025-12-07 10:19:31.805 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:19:31 np0005549474 nova_compute[256753]: 2025-12-07 10:19:31.875 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:32.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:19:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:33.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:34.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:35.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:36 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:19:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:36.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:36 np0005549474 nova_compute[256753]: 2025-12-07 10:19:36.878 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:19:36 np0005549474 nova_compute[256753]: 2025-12-07 10:19:36.880 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:19:36 np0005549474 nova_compute[256753]: 2025-12-07 10:19:36.880 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:19:36 np0005549474 nova_compute[256753]: 2025-12-07 10:19:36.881 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:19:36 np0005549474 nova_compute[256753]: 2025-12-07 10:19:36.922 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:36 np0005549474 nova_compute[256753]: 2025-12-07 10:19:36.922 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:19:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:37.198Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:19:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:37.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
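[editor's note] The warn line above exposes the underlying failure behind the recurring alertmanager errors: the dashboard receiver on the peer nodes is unreachable (dial tcp 192.168.122.102:8443: i/o timeout), so every dispatch exhausts its retries. A minimal reachability probe against the same URL (taken verbatim from the log, plain http on port 8443 as logged; the empty-alert payload mimics the Alertmanager webhook format and only tests connectivity, not the dashboard's full schema):

    import json
    import urllib.request

    body = json.dumps({"version": "4", "status": "firing", "alerts": []}).encode()
    req = urllib.request.Request(
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
        data=body, headers={"Content-Type": "application/json"})
    # Raises URLError on the same timeout the dispatcher is hitting.
    print(urllib.request.urlopen(req, timeout=5).status)
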
Dec  7 05:19:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:37.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:19:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:38.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:38 np0005549474 podman[287572]: 2025-12-07 10:19:38.284374671 +0000 UTC m=+0.088661605 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  7 05:19:38 np0005549474 podman[287573]: 2025-12-07 10:19:38.308839337 +0000 UTC m=+0.113130282 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
Dec  7 05:19:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:19:38.631 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:19:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:19:38.631 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:19:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:19:38.632 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:19:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:38.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:19:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:39.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:39] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:19:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:39] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:19:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:19:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:40.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:19:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:19:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:41.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:41 np0005549474 nova_compute[256753]: 2025-12-07 10:19:41.923 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:19:41 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 05:19:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:42.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:19:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:19:42
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms', '.nfs', '.mgr']
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:19:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:19:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:19:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:43.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:44 np0005549474 podman[287622]: 2025-12-07 10:19:44.269409814 +0000 UTC m=+0.082274363 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:19:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:44.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:45.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:19:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:46.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:46 np0005549474 nova_compute[256753]: 2025-12-07 10:19:46.924 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:19:46 np0005549474 nova_compute[256753]: 2025-12-07 10:19:46.926 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:46 np0005549474 nova_compute[256753]: 2025-12-07 10:19:46.926 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:19:46 np0005549474 nova_compute[256753]: 2025-12-07 10:19:46.926 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:19:46 np0005549474 nova_compute[256753]: 2025-12-07 10:19:46.926 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:19:46 np0005549474 nova_compute[256753]: 2025-12-07 10:19:46.927 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:47.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:19:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:47.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
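
The monitor's _set_new_cache_sizes line shows its cache autotuner splitting a memory budget: the three allocations sum to 1010827264 bytes, just under the reported cache_size of 1020054731 (about 0.95 GiB). A quick arithmetic check of the figures logged above:

    # Quick check of the mon cache split logged above.
    cache_size = 1020054731
    parts = {"inc_alloc": 343932928, "full_alloc": 348127232, "kv_alloc": 318767104}
    total = sum(parts.values())
    print(total)                           # 1010827264
    print(cache_size - total)              # 9227467 bytes of headroom
    print(round(cache_size / 2**30, 2))    # ~0.95 GiB
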
Dec  7 05:19:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:48.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:48.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
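
The alertmanager on compute-0 keeps failing to deliver the ceph-dashboard webhook to its two peers: every POST to compute-1 and compute-2 on :8443/api/prometheus_receiver hits the context deadline (and, further below, an explicit dial tcp i/o timeout), so each notification is dropped after two attempts. Failing at connect time points at the receivers being down or filtered rather than merely slow. A sketch of the equivalent delivery attempt; the JSON body is a placeholder, not the real alertmanager payload:

    # Minimal sketch of the webhook POST alertmanager is retrying.
    import json
    import urllib.request

    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url,
        data=json.dumps({"alerts": []}).encode(),  # placeholder payload
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)
    except OSError as exc:  # connect/read timeouts surface here
        print("delivery failed:", exc)
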
Dec  7 05:19:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:19:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:49.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:19:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:49] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:19:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:49] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:19:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:50 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-0[79964]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
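
The ceph-crash agent cannot scrape /var/lib/ceph/crash because it runs as the unprivileged ceph user and gets EACCES; the directory (or the bind mount above it) is likely owned by root or carries too-strict modes. The "167 167" lines printed by the helper containers below match the ceph uid/gid used in these images. A sketch to inspect the directory on the host (run as root, or as uid 167 to test access directly):

    # Minimal sketch: inspect the crash directory ceph-crash failed to read.
    import os
    import stat

    path = "/var/lib/ceph/crash"
    st = os.stat(path)
    print(f"{path}: uid={st.st_uid} gid={st.st_gid} "
          f"mode={stat.filemode(st.st_mode)}")
    # run this as the ceph user (uid 167 here) to test its view:
    print("readable+searchable by current user:",
          os.access(path, os.R_OK | os.X_OK))
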
Dec  7 05:19:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:19:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:51.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:51 np0005549474 nova_compute[256753]: 2025-12-07 10:19:51.927 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:52.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:19:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:19:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:53.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:54.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:54 np0005549474 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  7 05:19:54 np0005549474 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:19:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:19:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
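
The burst of handle_command and audit lines is the cephadm mgr module refreshing its state through the monitor: regenerating a minimal ceph.conf, fetching the client.admin and client.bootstrap-osd keys, persisting its osd_remove_queue and nfs.cephfs spec under config-key, and listing destroyed OSDs. The read-only queries can be reproduced by hand; a sketch shelling out to the ceph CLI, assuming an admin keyring is available on this host:

    # Minimal sketch: rerun two of the read-only queries dispatched above.
    import json
    import subprocess

    conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True).stdout
    print(conf)

    out = subprocess.run(
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    print([n["name"] for n in json.loads(out).get("nodes", [])])
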
Dec  7 05:19:55 np0005549474 podman[287856]: 2025-12-07 10:19:55.716794022 +0000 UTC m=+0.062226686 container create 7586d0ec64a8c2815fe1ad1c79e0ccd35141d37280db531e407c3c418aa5d9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swartz, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 05:19:55 np0005549474 podman[287856]: 2025-12-07 10:19:55.692565902 +0000 UTC m=+0.037998656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:19:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:55 np0005549474 systemd[1]: Started libpod-conmon-7586d0ec64a8c2815fe1ad1c79e0ccd35141d37280db531e407c3c418aa5d9e0.scope.
Dec  7 05:19:55 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:19:55 np0005549474 podman[287856]: 2025-12-07 10:19:55.869902913 +0000 UTC m=+0.215335597 container init 7586d0ec64a8c2815fe1ad1c79e0ccd35141d37280db531e407c3c418aa5d9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:19:55 np0005549474 podman[287856]: 2025-12-07 10:19:55.876670847 +0000 UTC m=+0.222103501 container start 7586d0ec64a8c2815fe1ad1c79e0ccd35141d37280db531e407c3c418aa5d9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 05:19:55 np0005549474 podman[287856]: 2025-12-07 10:19:55.879710279 +0000 UTC m=+0.225142953 container attach 7586d0ec64a8c2815fe1ad1c79e0ccd35141d37280db531e407c3c418aa5d9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 05:19:55 np0005549474 zealous_swartz[287874]: 167 167
Dec  7 05:19:55 np0005549474 systemd[1]: libpod-7586d0ec64a8c2815fe1ad1c79e0ccd35141d37280db531e407c3c418aa5d9e0.scope: Deactivated successfully.
Dec  7 05:19:55 np0005549474 podman[287856]: 2025-12-07 10:19:55.883107742 +0000 UTC m=+0.228540396 container died 7586d0ec64a8c2815fe1ad1c79e0ccd35141d37280db531e407c3c418aa5d9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swartz, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:19:55 np0005549474 systemd[1]: var-lib-containers-storage-overlay-693c69df8e6a411f64863b213a99cca83347b88779ccdd1c0a3e283da612627e-merged.mount: Deactivated successfully.
Dec  7 05:19:55 np0005549474 podman[287856]: 2025-12-07 10:19:55.928608612 +0000 UTC m=+0.274041276 container remove 7586d0ec64a8c2815fe1ad1c79e0ccd35141d37280db531e407c3c418aa5d9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:19:55 np0005549474 systemd[1]: libpod-conmon-7586d0ec64a8c2815fe1ad1c79e0ccd35141d37280db531e407c3c418aa5d9e0.scope: Deactivated successfully.
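
Each of these helper runs is the same short-lived podman lifecycle: image pull (already cached, so it resolves to the local digest immediately), container create, init, start, attach, a single line of output, died, remove, with systemd tearing down the libpod and conmon scopes around it. The "167 167" the container prints looks like cephadm probing the ceph uid/gid inside the image; a hedged reproduction (the exact command cephadm runs may differ):

    # Minimal sketch of a uid/gid probe like the one these throwaway
    # containers appear to perform; the stat target is an assumption.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout.strip()
    print(out)  # expect "167 167" for these images
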
Dec  7 05:19:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:19:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:19:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:19:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:19:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:19:56 np0005549474 podman[287901]: 2025-12-07 10:19:56.122944015 +0000 UTC m=+0.061696492 container create cfad81ccce2f399eb3a778184f51a4c430b01a8026ad6f6cead4fe719fcc717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  7 05:19:56 np0005549474 systemd[1]: Started libpod-conmon-cfad81ccce2f399eb3a778184f51a4c430b01a8026ad6f6cead4fe719fcc717b.scope.
Dec  7 05:19:56 np0005549474 podman[287901]: 2025-12-07 10:19:56.103486174 +0000 UTC m=+0.042238641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:19:56 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:19:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88219f6de3501ed37769065a5fba2284aeeb5a087412c68adb8ec11ef2c1d3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88219f6de3501ed37769065a5fba2284aeeb5a087412c68adb8ec11ef2c1d3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88219f6de3501ed37769065a5fba2284aeeb5a087412c68adb8ec11ef2c1d3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88219f6de3501ed37769065a5fba2284aeeb5a087412c68adb8ec11ef2c1d3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:56 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88219f6de3501ed37769065a5fba2284aeeb5a087412c68adb8ec11ef2c1d3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
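
The xfs messages fire once per bind mount as each container starts: the backing XFS was evidently created without big timestamps, so its inode timestamps cap at 0x7fffffff seconds after the epoch, the classic 32-bit limit. The hex value converts directly:

    # The 0x7fffffff limit from the xfs messages, as a date.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647 seconds
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
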
Dec  7 05:19:56 np0005549474 podman[287901]: 2025-12-07 10:19:56.240673801 +0000 UTC m=+0.179426288 container init cfad81ccce2f399eb3a778184f51a4c430b01a8026ad6f6cead4fe719fcc717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_khorana, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 05:19:56 np0005549474 podman[287901]: 2025-12-07 10:19:56.257343905 +0000 UTC m=+0.196096402 container start cfad81ccce2f399eb3a778184f51a4c430b01a8026ad6f6cead4fe719fcc717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:19:56 np0005549474 podman[287901]: 2025-12-07 10:19:56.261765686 +0000 UTC m=+0.200518153 container attach cfad81ccce2f399eb3a778184f51a4c430b01a8026ad6f6cead4fe719fcc717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_khorana, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 05:19:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:56.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:56 np0005549474 epic_khorana[287918]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:19:56 np0005549474 epic_khorana[287918]: --> All data devices are unavailable
Dec  7 05:19:56 np0005549474 systemd[1]: libpod-cfad81ccce2f399eb3a778184f51a4c430b01a8026ad6f6cead4fe719fcc717b.scope: Deactivated successfully.
Dec  7 05:19:56 np0005549474 podman[287901]: 2025-12-07 10:19:56.640381338 +0000 UTC m=+0.579133825 container died cfad81ccce2f399eb3a778184f51a4c430b01a8026ad6f6cead4fe719fcc717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_khorana, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:19:56 np0005549474 systemd[1]: var-lib-containers-storage-overlay-c88219f6de3501ed37769065a5fba2284aeeb5a087412c68adb8ec11ef2c1d3f-merged.mount: Deactivated successfully.
Dec  7 05:19:56 np0005549474 podman[287901]: 2025-12-07 10:19:56.694029549 +0000 UTC m=+0.632782016 container remove cfad81ccce2f399eb3a778184f51a4c430b01a8026ad6f6cead4fe719fcc717b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:19:56 np0005549474 systemd[1]: libpod-conmon-cfad81ccce2f399eb3a778184f51a4c430b01a8026ad6f6cead4fe719fcc717b.scope: Deactivated successfully.
Dec  7 05:19:56 np0005549474 nova_compute[256753]: 2025-12-07 10:19:56.927 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:56 np0005549474 nova_compute[256753]: 2025-12-07 10:19:56.930 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:19:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec  7 05:19:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:57.200Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:19:57 np0005549474 podman[288037]: 2025-12-07 10:19:57.356384809 +0000 UTC m=+0.062542814 container create 334b70bf50b190520fd4784f399ae5501986d7bbec15e80fda191a4eb282f687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:19:57 np0005549474 systemd[1]: Started libpod-conmon-334b70bf50b190520fd4784f399ae5501986d7bbec15e80fda191a4eb282f687.scope.
Dec  7 05:19:57 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:19:57 np0005549474 podman[288037]: 2025-12-07 10:19:57.336680522 +0000 UTC m=+0.042838537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:19:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:19:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:19:57 np0005549474 podman[288037]: 2025-12-07 10:19:57.450582075 +0000 UTC m=+0.156740090 container init 334b70bf50b190520fd4784f399ae5501986d7bbec15e80fda191a4eb282f687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:19:57 np0005549474 podman[288037]: 2025-12-07 10:19:57.460054753 +0000 UTC m=+0.166212788 container start 334b70bf50b190520fd4784f399ae5501986d7bbec15e80fda191a4eb282f687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Dec  7 05:19:57 np0005549474 podman[288037]: 2025-12-07 10:19:57.466120309 +0000 UTC m=+0.172278324 container attach 334b70bf50b190520fd4784f399ae5501986d7bbec15e80fda191a4eb282f687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 05:19:57 np0005549474 optimistic_kepler[288054]: 167 167
Dec  7 05:19:57 np0005549474 podman[288037]: 2025-12-07 10:19:57.46878458 +0000 UTC m=+0.174942615 container died 334b70bf50b190520fd4784f399ae5501986d7bbec15e80fda191a4eb282f687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  7 05:19:57 np0005549474 systemd[1]: libpod-334b70bf50b190520fd4784f399ae5501986d7bbec15e80fda191a4eb282f687.scope: Deactivated successfully.
Dec  7 05:19:57 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f63b40add482cca30ceb48d85b5f64f1b13652650c5480a31af70a97895431dd-merged.mount: Deactivated successfully.
Dec  7 05:19:57 np0005549474 podman[288037]: 2025-12-07 10:19:57.521853376 +0000 UTC m=+0.228011401 container remove 334b70bf50b190520fd4784f399ae5501986d7bbec15e80fda191a4eb282f687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:19:57 np0005549474 systemd[1]: libpod-conmon-334b70bf50b190520fd4784f399ae5501986d7bbec15e80fda191a4eb282f687.scope: Deactivated successfully.
Dec  7 05:19:57 np0005549474 podman[288077]: 2025-12-07 10:19:57.748709685 +0000 UTC m=+0.049975042 container create 9f8bd2be96c9f8cc910727a18e048f7d8a2bd671faedb2e367a3025276c38096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:19:57 np0005549474 systemd[1]: Started libpod-conmon-9f8bd2be96c9f8cc910727a18e048f7d8a2bd671faedb2e367a3025276c38096.scope.
Dec  7 05:19:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:57.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:57 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:19:57 np0005549474 podman[288077]: 2025-12-07 10:19:57.731354122 +0000 UTC m=+0.032619499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:19:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbb420ef64f985f567fa5b3b7b455c89ad37f472b3900fe9fe801f8ecaf9008/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbb420ef64f985f567fa5b3b7b455c89ad37f472b3900fe9fe801f8ecaf9008/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbb420ef64f985f567fa5b3b7b455c89ad37f472b3900fe9fe801f8ecaf9008/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbb420ef64f985f567fa5b3b7b455c89ad37f472b3900fe9fe801f8ecaf9008/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:57 np0005549474 podman[288077]: 2025-12-07 10:19:57.855897295 +0000 UTC m=+0.157162652 container init 9f8bd2be96c9f8cc910727a18e048f7d8a2bd671faedb2e367a3025276c38096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhaskara, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 05:19:57 np0005549474 podman[288077]: 2025-12-07 10:19:57.864945121 +0000 UTC m=+0.166210518 container start 9f8bd2be96c9f8cc910727a18e048f7d8a2bd671faedb2e367a3025276c38096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhaskara, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:19:57 np0005549474 podman[288077]: 2025-12-07 10:19:57.869827124 +0000 UTC m=+0.171092571 container attach 9f8bd2be96c9f8cc910727a18e048f7d8a2bd671faedb2e367a3025276c38096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  7 05:19:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]: {
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:    "0": [
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:        {
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            "devices": [
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "/dev/loop3"
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            ],
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            "lv_name": "ceph_lv0",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            "lv_size": "21470642176",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            "name": "ceph_lv0",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            "tags": {
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.cluster_name": "ceph",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.crush_device_class": "",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.encrypted": "0",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.osd_id": "0",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.type": "block",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.vdo": "0",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:                "ceph.with_tpm": "0"
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            },
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            "type": "block",
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:            "vg_name": "ceph_vg0"
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:        }
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]:    ]
Dec  7 05:19:58 np0005549474 zealous_bhaskara[288093]: }
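
The zealous_bhaskara output is a ceph-volume LVM listing in JSON (split one JSON line per journal line): a single logical volume /dev/ceph_vg0/ceph_lv0 on /dev/loop3, about 20 GiB (lv_size 21470642176 bytes), tagged as the block device of OSD 0 with osd_fsid 32dc95f1-8dbf-4ad2-8ecd-93489439352c. That also explains the earlier epic_khorana report that all data devices are unavailable: the only candidate LV already carries an OSD. A sketch that reassembles and parses such output from journal lines:

    # Minimal sketch: re-assemble the JSON a helper container logged
    # (one journal line per JSON line) and load it back.
    import json
    import re

    def extract(journal_lines, unit="zealous_bhaskara"):
        body = []
        for line in journal_lines:
            m = re.search(rf"{unit}\[\d+\]: (.*)$", line)
            if m:
                body.append(m.group(1))
        return json.loads("\n".join(body))

    sample = [
        'Dec  7 05:19:58 host zealous_bhaskara[288093]: {',
        'Dec  7 05:19:58 host zealous_bhaskara[288093]:    "0": []',
        'Dec  7 05:19:58 host zealous_bhaskara[288093]: }',
    ]
    print(extract(sample))  # {'0': []}
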
Dec  7 05:19:58 np0005549474 systemd[1]: libpod-9f8bd2be96c9f8cc910727a18e048f7d8a2bd671faedb2e367a3025276c38096.scope: Deactivated successfully.
Dec  7 05:19:58 np0005549474 podman[288077]: 2025-12-07 10:19:58.238473074 +0000 UTC m=+0.539738531 container died 9f8bd2be96c9f8cc910727a18e048f7d8a2bd671faedb2e367a3025276c38096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 05:19:58 np0005549474 systemd[1]: var-lib-containers-storage-overlay-5bbb420ef64f985f567fa5b3b7b455c89ad37f472b3900fe9fe801f8ecaf9008-merged.mount: Deactivated successfully.
Dec  7 05:19:58 np0005549474 podman[288077]: 2025-12-07 10:19:58.295107417 +0000 UTC m=+0.596372814 container remove 9f8bd2be96c9f8cc910727a18e048f7d8a2bd671faedb2e367a3025276c38096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 05:19:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:19:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:58.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:19:58 np0005549474 systemd[1]: libpod-conmon-9f8bd2be96c9f8cc910727a18e048f7d8a2bd671faedb2e367a3025276c38096.scope: Deactivated successfully.
Dec  7 05:19:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:58.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:19:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:19:58.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:19:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:19:59 np0005549474 podman[288209]: 2025-12-07 10:19:59.177358716 +0000 UTC m=+0.077913752 container create 50339b27052147264a4951283820fc11a7432d5213faa940f7522f802a2747d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 05:19:59 np0005549474 podman[288209]: 2025-12-07 10:19:59.14885662 +0000 UTC m=+0.049411656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:19:59 np0005549474 systemd[1]: Started libpod-conmon-50339b27052147264a4951283820fc11a7432d5213faa940f7522f802a2747d0.scope.
Dec  7 05:19:59 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:19:59 np0005549474 podman[288209]: 2025-12-07 10:19:59.301451107 +0000 UTC m=+0.202006193 container init 50339b27052147264a4951283820fc11a7432d5213faa940f7522f802a2747d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:19:59 np0005549474 podman[288209]: 2025-12-07 10:19:59.318618924 +0000 UTC m=+0.219173960 container start 50339b27052147264a4951283820fc11a7432d5213faa940f7522f802a2747d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 05:19:59 np0005549474 podman[288209]: 2025-12-07 10:19:59.322981433 +0000 UTC m=+0.223536519 container attach 50339b27052147264a4951283820fc11a7432d5213faa940f7522f802a2747d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:19:59 np0005549474 hopeful_torvalds[288224]: 167 167
Dec  7 05:19:59 np0005549474 systemd[1]: libpod-50339b27052147264a4951283820fc11a7432d5213faa940f7522f802a2747d0.scope: Deactivated successfully.
Dec  7 05:19:59 np0005549474 podman[288209]: 2025-12-07 10:19:59.330042615 +0000 UTC m=+0.230597661 container died 50339b27052147264a4951283820fc11a7432d5213faa940f7522f802a2747d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_torvalds, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:19:59 np0005549474 systemd[1]: var-lib-containers-storage-overlay-70546b4fb4c8b6680d1775007abd3e61ec8e98f0947eb62b766a47d5fad0e96a-merged.mount: Deactivated successfully.
Dec  7 05:19:59 np0005549474 podman[288209]: 2025-12-07 10:19:59.394378668 +0000 UTC m=+0.294933704 container remove 50339b27052147264a4951283820fc11a7432d5213faa940f7522f802a2747d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_torvalds, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:19:59 np0005549474 systemd[1]: libpod-conmon-50339b27052147264a4951283820fc11a7432d5213faa940f7522f802a2747d0.scope: Deactivated successfully.
Dec  7 05:19:59 np0005549474 podman[288251]: 2025-12-07 10:19:59.686631787 +0000 UTC m=+0.100004135 container create ebe44cd984d42fc9600bcc5c8590aadcbf5e4d9a760eb7b6ecb1b87c3663694a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  7 05:19:59 np0005549474 podman[288251]: 2025-12-07 10:19:59.642087104 +0000 UTC m=+0.055459512 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:19:59 np0005549474 systemd[1]: Started libpod-conmon-ebe44cd984d42fc9600bcc5c8590aadcbf5e4d9a760eb7b6ecb1b87c3663694a.scope.
Dec  7 05:19:59 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:19:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6577110f5e6b7355d94685459243a812238ac89d60361929a9bfeca81e56ebb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6577110f5e6b7355d94685459243a812238ac89d60361929a9bfeca81e56ebb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6577110f5e6b7355d94685459243a812238ac89d60361929a9bfeca81e56ebb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:59 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6577110f5e6b7355d94685459243a812238ac89d60361929a9bfeca81e56ebb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:19:59 np0005549474 podman[288251]: 2025-12-07 10:19:59.831542435 +0000 UTC m=+0.244914843 container init ebe44cd984d42fc9600bcc5c8590aadcbf5e4d9a760eb7b6ecb1b87c3663694a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 05:19:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:19:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:19:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:59.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:19:59 np0005549474 podman[288251]: 2025-12-07 10:19:59.849058221 +0000 UTC m=+0.262430569 container start ebe44cd984d42fc9600bcc5c8590aadcbf5e4d9a760eb7b6ecb1b87c3663694a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:19:59 np0005549474 podman[288251]: 2025-12-07 10:19:59.856412992 +0000 UTC m=+0.269785390 container attach ebe44cd984d42fc9600bcc5c8590aadcbf5e4d9a760eb7b6ecb1b87c3663694a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:19:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:59] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  7 05:19:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:19:59] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  7 05:20:00 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Dec  7 05:20:00 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec  7 05:20:00 np0005549474 ceph-mon[74516]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.jddrlu on compute-1 is in error state
Dec  7 05:20:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:00.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:00 np0005549474 lvm[288342]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:20:00 np0005549474 lvm[288342]: VG ceph_vg0 finished
Dec  7 05:20:00 np0005549474 serene_lehmann[288267]: {}
Dec  7 05:20:00 np0005549474 systemd[1]: libpod-ebe44cd984d42fc9600bcc5c8590aadcbf5e4d9a760eb7b6ecb1b87c3663694a.scope: Deactivated successfully.
Dec  7 05:20:00 np0005549474 podman[288251]: 2025-12-07 10:20:00.671810221 +0000 UTC m=+1.085182579 container died ebe44cd984d42fc9600bcc5c8590aadcbf5e4d9a760eb7b6ecb1b87c3663694a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 05:20:00 np0005549474 systemd[1]: libpod-ebe44cd984d42fc9600bcc5c8590aadcbf5e4d9a760eb7b6ecb1b87c3663694a.scope: Consumed 1.397s CPU time.
Dec  7 05:20:00 np0005549474 ceph-mon[74516]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Dec  7 05:20:00 np0005549474 ceph-mon[74516]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec  7 05:20:00 np0005549474 ceph-mon[74516]:    daemon nfs.cephfs.0.0.compute-1.jddrlu on compute-1 is in error state
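
The CEPHADM_FAILED_DAEMON warning above repeats on every cluster-log tick because cephadm still sees nfs.cephfs.0.0.compute-1.jddrlu down on compute-1. A minimal Python sketch of the same check, assuming a ceph CLI configured with a keyring that can read cluster health:

    import json
    import subprocess

    # Same data the mon is logging: active health checks with their detail.
    out = subprocess.check_output(
        ["ceph", "health", "detail", "--format", "json"], text=True
    )
    health = json.loads(out)
    for name, check in health.get("checks", {}).items():
        # CEPHADM_FAILED_DAEMON appears here while the nfs daemon is down.
        print(name, check["severity"], check["summary"]["message"])
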
Dec  7 05:20:00 np0005549474 systemd[1]: var-lib-containers-storage-overlay-6577110f5e6b7355d94685459243a812238ac89d60361929a9bfeca81e56ebb1-merged.mount: Deactivated successfully.
Dec  7 05:20:00 np0005549474 podman[288251]: 2025-12-07 10:20:00.740743048 +0000 UTC m=+1.154115366 container remove ebe44cd984d42fc9600bcc5c8590aadcbf5e4d9a760eb7b6ecb1b87c3663694a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:20:00 np0005549474 systemd[1]: libpod-conmon-ebe44cd984d42fc9600bcc5c8590aadcbf5e4d9a760eb7b6ecb1b87c3663694a.scope: Deactivated successfully.
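
The create/init/start/attach/died/remove sequence for serene_lehmann (and hopeful_torvalds just before it) is cephadm running a short-lived ceph container that exits after roughly a second; the `{}` on the serene_lehmann line is its stdout. A sketch for watching that lifecycle live, assuming podman's JSON event stream and its 4.x field names:

    import json
    import subprocess

    # Stream container lifecycle events as journald records them above.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Status is create/init/start/died/remove etc., as in the log.
        print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Image", ""))
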
Dec  7 05:20:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:20:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:20:00 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:20:00 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:20:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
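
ganesha.nfsd keeps re-entering a 90-second grace period, reloads client info from the RADOS recovery backend, and immediately checks whether grace can be lifted; with no clients to reclaim (clid count(0)) the same four events recur about every five seconds through the rest of this section. A sketch that extracts just the grace STATE events from a saved journal excerpt (the regex only assumes the message layout shown here):

    import re
    import sys

    # Matches e.g. "ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :..."
    GRACE = re.compile(r"ganesha\.nfsd-\d+\[main\] (\w+) :STATE :EVENT :(.+)$")
    for line in sys.stdin:
        m = GRACE.search(line)
        if m:
            func, msg = m.groups()
            print(f"{func}: {msg}")
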
Dec  7 05:20:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec  7 05:20:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:01.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:01 np0005549474 nova_compute[256753]: 2025-12-07 10:20:01.930 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:01 np0005549474 nova_compute[256753]: 2025-12-07 10:20:01.932 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:01 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:20:01 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:20:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:02.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec  7 05:20:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:03.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:04.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:05.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:20:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:06.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:06 np0005549474 nova_compute[256753]: 2025-12-07 10:20:06.934 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:06 np0005549474 nova_compute[256753]: 2025-12-07 10:20:06.936 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:06 np0005549474 nova_compute[256753]: 2025-12-07 10:20:06.936 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:20:06 np0005549474 nova_compute[256753]: 2025-12-07 10:20:06.936 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:06 np0005549474 nova_compute[256753]: 2025-12-07 10:20:06.979 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:06 np0005549474 nova_compute[256753]: 2025-12-07 10:20:06.980 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
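
The ovsdbapp IDL cycle here is routine keepalive, not a reconnect: the tcp:127.0.0.1:6640 session idles for ~5 s, sends an inactivity probe, and returns to ACTIVE once the reply wakes the poller. A sketch that confirms the cadence from a journal excerpt by timing consecutive probes (it assumes only the timestamp format used in these lines):

    import re
    import sys
    from datetime import datetime

    # Timestamp of each "sending inactivity probe" line, as formatted above.
    PROBE = re.compile(
        r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+).*sending inactivity probe"
    )
    last = None
    for line in sys.stdin:
        m = PROBE.search(line)
        if m:
            t = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
            if last is not None:
                print(f"probe interval: {(t - last).total_seconds():.3f} s")
            last = t
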
Dec  7 05:20:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:07.202Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:20:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:07.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
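
Alertmanager's ceph-dashboard receiver cannot deliver to either dashboard endpoint: compute-1 and compute-2 both time out on 8443, so each dispatch exhausts its retry budget and fails. A quick reachability probe against the logged URL, assuming a 5-second timeout (Alertmanager itself POSTs; a plain GET is enough to reproduce the dial timeout):

    import urllib.request

    # URL copied from the failing webhook above.
    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print("reachable:", resp.status)
    except OSError as exc:
        # The dial timeout journald records surfaces here as URLError.
        print("unreachable:", exc)
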
Dec  7 05:20:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 05:20:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:07.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  7 05:20:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:08.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:08.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:20:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:08.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:20:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:08.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:20:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:09 np0005549474 podman[288419]: 2025-12-07 10:20:09.285466408 +0000 UTC m=+0.084670517 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  7 05:20:09 np0005549474 podman[288420]: 2025-12-07 10:20:09.32080231 +0000 UTC m=+0.120757460 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
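
The health_status=healthy events for multipathd and ovn_controller come from podman's periodic healthcheck, which runs the '/openstack/healthcheck' test mounted into each container per the config_data above. The same check can be run on demand; a sketch assuming the container names from the log:

    import subprocess

    # Exit code 0 means healthy, mirroring health_status=healthy above.
    for name in ("multipathd", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
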
Dec  7 05:20:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:09] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  7 05:20:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:09] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  7 05:20:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:10.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:20:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:11.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:11 np0005549474 nova_compute[256753]: 2025-12-07 10:20:11.978 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:12.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:20:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
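
The audited "osd blocklist ls" dispatch is the mgr's periodic poll of blocklisted client addresses (it recurs again below at 05:20:27). The equivalent query by hand, assuming a configured ceph CLI:

    import json
    import subprocess

    # Same command the mgr dispatches above; returns a JSON list of entries.
    out = subprocess.check_output(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"], text=True
    )
    print(json.loads(out))
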
Dec  7 05:20:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:20:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:20:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:20:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:20:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:20:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:20:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:13.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:14.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:15 np0005549474 podman[288467]: 2025-12-07 10:20:15.283404602 +0000 UTC m=+0.084637617 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:20:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:15.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:16 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:20:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:16.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:16 np0005549474 nova_compute[256753]: 2025-12-07 10:20:16.980 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:16 np0005549474 nova_compute[256753]: 2025-12-07 10:20:16.982 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:16 np0005549474 nova_compute[256753]: 2025-12-07 10:20:16.982 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:20:16 np0005549474 nova_compute[256753]: 2025-12-07 10:20:16.982 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:17 np0005549474 nova_compute[256753]: 2025-12-07 10:20:17.007 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:17 np0005549474 nova_compute[256753]: 2025-12-07 10:20:17.008 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:17.203Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:20:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:17.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:18.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:18 np0005549474 systemd[1]: session-56.scope: Deactivated successfully.
Dec  7 05:20:18 np0005549474 systemd[1]: session-56.scope: Consumed 2min 54.887s CPU time, 842.3M memory peak, read 335.3M from disk, written 204.9M to disk.
Dec  7 05:20:18 np0005549474 systemd-logind[796]: Session 56 logged out. Waiting for processes to exit.
Dec  7 05:20:18 np0005549474 systemd-logind[796]: Removed session 56.
Dec  7 05:20:18 np0005549474 systemd-logind[796]: New session 57 of user zuul.
Dec  7 05:20:18 np0005549474 systemd[1]: Started Session 57 of User zuul.
Dec  7 05:20:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:18.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:20:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:18.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:20:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:18.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:20:19 np0005549474 systemd[1]: session-57.scope: Deactivated successfully.
Dec  7 05:20:19 np0005549474 systemd-logind[796]: Session 57 logged out. Waiting for processes to exit.
Dec  7 05:20:19 np0005549474 systemd-logind[796]: Removed session 57.
Dec  7 05:20:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:19 np0005549474 systemd-logind[796]: New session 58 of user zuul.
Dec  7 05:20:19 np0005549474 systemd[1]: Started Session 58 of User zuul.
Dec  7 05:20:19 np0005549474 systemd[1]: session-58.scope: Deactivated successfully.
Dec  7 05:20:19 np0005549474 systemd-logind[796]: Session 58 logged out. Waiting for processes to exit.
Dec  7 05:20:19 np0005549474 systemd-logind[796]: Removed session 58.
Dec  7 05:20:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:19.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:19] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:20:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:19] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:20:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:20.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:21 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:20:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:21 np0005549474 nova_compute[256753]: 2025-12-07 10:20:21.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:20:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:21.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:22 np0005549474 nova_compute[256753]: 2025-12-07 10:20:22.009 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:22 np0005549474 nova_compute[256753]: 2025-12-07 10:20:22.011 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:22 np0005549474 nova_compute[256753]: 2025-12-07 10:20:22.011 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:20:22 np0005549474 nova_compute[256753]: 2025-12-07 10:20:22.011 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:22 np0005549474 nova_compute[256753]: 2025-12-07 10:20:22.038 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:22 np0005549474 nova_compute[256753]: 2025-12-07 10:20:22.039 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:22.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:23.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:24.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:25 np0005549474 nova_compute[256753]: 2025-12-07 10:20:25.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:20:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:20:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:25.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:20:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:20:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:26.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:26 np0005549474 nova_compute[256753]: 2025-12-07 10:20:26.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:20:26 np0005549474 nova_compute[256753]: 2025-12-07 10:20:26.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:20:27 np0005549474 nova_compute[256753]: 2025-12-07 10:20:27.040 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:27 np0005549474 nova_compute[256753]: 2025-12-07 10:20:27.042 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:27 np0005549474 nova_compute[256753]: 2025-12-07 10:20:27.042 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:20:27 np0005549474 nova_compute[256753]: 2025-12-07 10:20:27.042 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:27 np0005549474 nova_compute[256753]: 2025-12-07 10:20:27.067 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:27 np0005549474 nova_compute[256753]: 2025-12-07 10:20:27.067 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:27.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:20:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:20:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:20:27 np0005549474 nova_compute[256753]: 2025-12-07 10:20:27.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:20:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:27.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:28.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:28 np0005549474 nova_compute[256753]: 2025-12-07 10:20:28.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:20:28 np0005549474 nova_compute[256753]: 2025-12-07 10:20:28.788 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:20:28 np0005549474 nova_compute[256753]: 2025-12-07 10:20:28.788 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:20:28 np0005549474 nova_compute[256753]: 2025-12-07 10:20:28.788 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:20:28 np0005549474 nova_compute[256753]: 2025-12-07 10:20:28.788 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:20:28 np0005549474 nova_compute[256753]: 2025-12-07 10:20:28.789 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:20:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:28.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:20:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:28.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:20:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:20:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687243534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.216 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.406 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.407 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4479MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.407 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.408 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.480 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.480 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.495 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:20:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:29.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:20:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067814856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.951 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.958 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.978 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.980 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:20:29 np0005549474 nova_compute[256753]: 2025-12-07 10:20:29.980 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:20:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:29] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:20:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:29] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:20:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:30.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:30 np0005549474 nova_compute[256753]: 2025-12-07 10:20:30.976 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:20:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:20:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:31 np0005549474 nova_compute[256753]: 2025-12-07 10:20:31.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:20:31 np0005549474 nova_compute[256753]: 2025-12-07 10:20:31.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:20:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:31.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:32 np0005549474 nova_compute[256753]: 2025-12-07 10:20:32.068 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:32 np0005549474 nova_compute[256753]: 2025-12-07 10:20:32.069 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:32 np0005549474 nova_compute[256753]: 2025-12-07 10:20:32.069 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:20:32 np0005549474 nova_compute[256753]: 2025-12-07 10:20:32.070 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:32 np0005549474 nova_compute[256753]: 2025-12-07 10:20:32.071 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:32 np0005549474 nova_compute[256753]: 2025-12-07 10:20:32.074 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:32.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:33 np0005549474 nova_compute[256753]: 2025-12-07 10:20:33.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:20:33 np0005549474 nova_compute[256753]: 2025-12-07 10:20:33.755 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:20:33 np0005549474 nova_compute[256753]: 2025-12-07 10:20:33.755 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:20:33 np0005549474 nova_compute[256753]: 2025-12-07 10:20:33.773 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:20:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:33.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:34.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:35.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:36 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:20:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:36.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:37 np0005549474 nova_compute[256753]: 2025-12-07 10:20:37.075 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:37 np0005549474 nova_compute[256753]: 2025-12-07 10:20:37.076 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:37 np0005549474 nova_compute[256753]: 2025-12-07 10:20:37.076 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:20:37 np0005549474 nova_compute[256753]: 2025-12-07 10:20:37.077 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec  7 05:20:37 np0005549474 nova_compute[256753]: 2025-12-07 10:20:37.111 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:37 np0005549474 nova_compute[256753]: 2025-12-07 10:20:37.112 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:37.205Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:20:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:37.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:20:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:37.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:38.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:20:38.631 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:20:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:20:38.632 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:20:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:20:38.632 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:20:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:38.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:20:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:39.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:39] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:20:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:39] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:20:40 np0005549474 podman[288641]: 2025-12-07 10:20:40.252050319 +0000 UTC m=+0.065354942 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:20:40 np0005549474 podman[288642]: 2025-12-07 10:20:40.281610573 +0000 UTC m=+0.093640791 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:20:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:40.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:40 np0005549474 nova_compute[256753]: 2025-12-07 10:20:40.767 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:20:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:20:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec  7 05:20:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:41.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:42 np0005549474 nova_compute[256753]: 2025-12-07 10:20:42.113 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:42 np0005549474 nova_compute[256753]: 2025-12-07 10:20:42.115 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:42 np0005549474 nova_compute[256753]: 2025-12-07 10:20:42.115 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:20:42 np0005549474 nova_compute[256753]: 2025-12-07 10:20:42.115 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:42 np0005549474 nova_compute[256753]: 2025-12-07 10:20:42.164 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:42 np0005549474 nova_compute[256753]: 2025-12-07 10:20:42.165 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:42.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:20:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:20:42
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'images', 'default.rgw.log', 'backups', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.meta', '.nfs']
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:20:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:20:42 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:20:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 05:20:43 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2972 syncs, 3.65 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1698 writes, 6024 keys, 1698 commit groups, 1.0 writes per commit group, ingest: 7.23 MB, 0.01 MB/s#012Interval WAL: 1698 writes, 717 syncs, 2.37 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  7 05:20:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:43.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:44.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:45.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:20:46 np0005549474 podman[288692]: 2025-12-07 10:20:46.271923471 +0000 UTC m=+0.077217625 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  7 05:20:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:47 np0005549474 nova_compute[256753]: 2025-12-07 10:20:47.166 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:47 np0005549474 nova_compute[256753]: 2025-12-07 10:20:47.169 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:47 np0005549474 nova_compute[256753]: 2025-12-07 10:20:47.169 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:20:47 np0005549474 nova_compute[256753]: 2025-12-07 10:20:47.169 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:47 np0005549474 nova_compute[256753]: 2025-12-07 10:20:47.189 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:47 np0005549474 nova_compute[256753]: 2025-12-07 10:20:47.191 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:47.206Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:20:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:20:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:47.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:20:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:48.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:48.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:20:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:49.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:49] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  7 05:20:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:49] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  7 05:20:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:50.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:20:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:51.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:52 np0005549474 nova_compute[256753]: 2025-12-07 10:20:52.191 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:52 np0005549474 nova_compute[256753]: 2025-12-07 10:20:52.193 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:52.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:53.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:54.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:20:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:55.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:20:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:20:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:20:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:20:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:20:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:20:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:56.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:20:57 np0005549474 nova_compute[256753]: 2025-12-07 10:20:57.192 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:57 np0005549474 nova_compute[256753]: 2025-12-07 10:20:57.193 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:20:57 np0005549474 nova_compute[256753]: 2025-12-07 10:20:57.193 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:20:57 np0005549474 nova_compute[256753]: 2025-12-07 10:20:57.194 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:57 np0005549474 nova_compute[256753]: 2025-12-07 10:20:57.194 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:20:57 np0005549474 nova_compute[256753]: 2025-12-07 10:20:57.195 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:20:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:57.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:20:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:20:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:20:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:20:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:57.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:58.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:20:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:20:58.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:20:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:20:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:20:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:20:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:59.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:20:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:59] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  7 05:20:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:20:59] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec  7 05:21:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:21:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:21:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:00.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:21:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:21:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:21:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:21:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:21:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:21:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:21:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:21:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:21:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:21:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:21:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:21:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:21:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:21:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 05:21:01 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:21:01 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 05:22:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:22:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:22:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:22:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:22:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:22:41 np0005549474 rsyslogd[1010]: imjournal: 1257 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  7 05:22:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:22:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:42.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:22:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:22:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:22:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:22:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:22:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:22:42
Dec  7 05:22:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:22:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:22:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['.rgw.root', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.nfs', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr']
Dec  7 05:22:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:22:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:22:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:22:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:22:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:22:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:22:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:42.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:22:42 np0005549474 nova_compute[256753]: 2025-12-07 10:22:42.648 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:22:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:22:43 np0005549474 podman[290717]: 2025-12-07 10:22:43.300349282 +0000 UTC m=+0.108021193 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:22:43 np0005549474 podman[290718]: 2025-12-07 10:22:43.327353757 +0000 UTC m=+0.129936229 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  7 05:22:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:22:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:22:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:44.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:22:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:22:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:44.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:22:44 np0005549474 nova_compute[256753]: 2025-12-07 10:22:44.778 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:22:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.540798) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965540844, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1426, "num_deletes": 506, "total_data_size": 2082974, "memory_usage": 2133040, "flush_reason": "Manual Compaction"}
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965561532, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2035465, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33272, "largest_seqno": 34697, "table_properties": {"data_size": 2029215, "index_size": 3004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16402, "raw_average_key_size": 18, "raw_value_size": 2014557, "raw_average_value_size": 2323, "num_data_blocks": 131, "num_entries": 867, "num_filter_entries": 867, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102863, "oldest_key_time": 1765102863, "file_creation_time": 1765102965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 20847 microseconds, and 10334 cpu microseconds.
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.561638) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2035465 bytes OK
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.561671) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.563481) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.563504) EVENT_LOG_v1 {"time_micros": 1765102965563496, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.563528) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2075666, prev total WAL file size 2075666, number of live WAL files 2.
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.564589) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1987KB)], [71(14MB)]
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965564631, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 17579997, "oldest_snapshot_seqno": -1}
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6652 keys, 15337114 bytes, temperature: kUnknown
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965769941, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 15337114, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15291866, "index_size": 27536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 174804, "raw_average_key_size": 26, "raw_value_size": 15171144, "raw_average_value_size": 2280, "num_data_blocks": 1087, "num_entries": 6652, "num_filter_entries": 6652, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765102965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.770386) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 15337114 bytes
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.771771) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.6 rd, 74.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 14.8 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(16.2) write-amplify(7.5) OK, records in: 7681, records dropped: 1029 output_compression: NoCompression
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.771809) EVENT_LOG_v1 {"time_micros": 1765102965771792, "job": 40, "event": "compaction_finished", "compaction_time_micros": 205428, "compaction_time_cpu_micros": 38747, "output_level": 6, "num_output_files": 1, "total_output_size": 15337114, "num_input_records": 7681, "num_output_records": 6652, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965772689, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965777945, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.564496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.777991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.777997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.778000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.778003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:22:45 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:22:45.778006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:22:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:22:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:22:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:22:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:22:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:22:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:46.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:22:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:22:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:46.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:22:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:22:47.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:22:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:22:47 np0005549474 nova_compute[256753]: 2025-12-07 10:22:47.651 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:22:47 np0005549474 nova_compute[256753]: 2025-12-07 10:22:47.653 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:22:47 np0005549474 nova_compute[256753]: 2025-12-07 10:22:47.653 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:22:47 np0005549474 nova_compute[256753]: 2025-12-07 10:22:47.653 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:22:47 np0005549474 nova_compute[256753]: 2025-12-07 10:22:47.696 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:22:47 np0005549474 nova_compute[256753]: 2025-12-07 10:22:47.696 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:22:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:22:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:22:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:48.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:22:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:22:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:48.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:22:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:22:48.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:22:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:22:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:22:49] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:22:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:22:49] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:22:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:22:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:50.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:22:50 np0005549474 podman[290796]: 2025-12-07 10:22:50.256879825 +0000 UTC m=+0.060337885 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  7 05:22:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:22:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:50.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:22:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:22:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:22:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:22:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:22:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:22:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:22:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:52.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:22:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:22:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:52.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:22:52 np0005549474 nova_compute[256753]: 2025-12-07 10:22:52.697 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:22:52 np0005549474 nova_compute[256753]: 2025-12-07 10:22:52.698 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:22:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:22:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:22:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:22:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:54.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:22:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:22:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:54.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:22:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:22:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:22:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:22:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:22:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:22:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:22:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:22:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:56.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:22:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:22:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:56.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:22:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:22:57.220Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:22:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:22:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:22:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:22:57 np0005549474 nova_compute[256753]: 2025-12-07 10:22:57.699 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:22:57 np0005549474 nova_compute[256753]: 2025-12-07 10:22:57.701 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:22:57 np0005549474 nova_compute[256753]: 2025-12-07 10:22:57.701 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:22:57 np0005549474 nova_compute[256753]: 2025-12-07 10:22:57.701 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:22:57 np0005549474 nova_compute[256753]: 2025-12-07 10:22:57.730 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:22:57 np0005549474 nova_compute[256753]: 2025-12-07 10:22:57.731 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:22:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:22:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:22:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:58.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:22:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:22:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:22:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:58.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:22:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:22:58.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:22:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:22:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:22:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:22:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:22:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:23:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:00.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:00.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:23:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:02.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:02.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:02 np0005549474 nova_compute[256753]: 2025-12-07 10:23:02.732 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:23:02 np0005549474 nova_compute[256753]: 2025-12-07 10:23:02.734 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:23:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:04.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:04.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:23:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
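Note: the ganesha.nfsd block above is one pass of the rados_cluster recovery backend's grace loop: the server (re)enters a 90 second grace period, reloads client recovery state from the RADOS backend, finds no clients to wait for (reclaim complete(0) clid count(0)), then polls whether the cluster-wide grace epoch is still being enforced. The ret=-45 appears to be a negated errno-style code propagated from that check; what 45 maps to is platform dependent, so it is safer to decode it locally than to guess, e.g.:

    import errno, os

    ret = -45          # as logged by rados_cluster_grace_enforcing
    code = -ret        # negative return conventionally means -errno
    print(errno.errorcode.get(code, "unknown"), "-", os.strerror(code))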
Dec  7 05:23:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:06.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:06.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:07.221Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:23:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:07.221Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:23:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:07.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
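Note: all three alertmanager entries describe the same failure mode: the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443, path /api/prometheus_receiver) are not answering, so every notify attempt times out and the retry loop gives up at its context deadline. The alerts themselves are generated fine; it is only the delivery path that is down. A quick reachability check for both endpoints, sketched in Python:

    import socket

    # Mirror alertmanager's "dial tcp ...:8443: i/o timeout" with a raw connect.
    for host in ("192.168.122.101", "192.168.122.102"):
        s = socket.socket()
        s.settimeout(3)
        try:
            s.connect((host, 8443))
            print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)
        finally:
            s.close()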
Dec  7 05:23:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:07 np0005549474 nova_compute[256753]: 2025-12-07 10:23:07.779 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:23:07 np0005549474 nova_compute[256753]: 2025-12-07 10:23:07.781 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:23:07 np0005549474 nova_compute[256753]: 2025-12-07 10:23:07.781 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5048 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:23:07 np0005549474 nova_compute[256753]: 2025-12-07 10:23:07.782 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:23:07 np0005549474 nova_compute[256753]: 2025-12-07 10:23:07.782 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:23:07 np0005549474 nova_compute[256753]: 2025-12-07 10:23:07.784 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
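Note: the ovsdbapp burst above is the OVS python client's normal keepalive cycle against the local ovsdb-server on tcp:127.0.0.1:6640: after roughly five seconds without traffic it transitions ACTIVE -> IDLE, sends an inactivity probe, and returns to ACTIVE as soon as the reply wakes the poller (the POLLIN on fd 24). It is DEBUG noise, not an error. A conceptual sketch of that state machine (not the real ovs.reconnect API, just its logic):

    import time

    IDLE_TIMEOUT = 5.0  # matches the ~5000 ms intervals in the log

    state = "ACTIVE"
    last_rx = time.monotonic()

    def on_tick(now):
        """Called on poller timeout wakeups."""
        global state
        if state == "ACTIVE" and now - last_rx >= IDLE_TIMEOUT:
            state = "IDLE"        # send an inactivity probe (echo) here
        elif state == "IDLE" and now - last_rx >= 2 * IDLE_TIMEOUT:
            state = "RECONNECT"   # probe went unanswered: drop and redial

    def on_reply(now):
        """Called when the probe reply arrives (the POLLIN wakeup)."""
        global state, last_rx
        last_rx, state = now, "ACTIVE"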
Dec  7 05:23:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:08.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:08.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:08.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:23:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:23:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:23:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
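Note: the paired access-log lines record one event twice, once from the mgr container's stdout and once from the mgr daemon's cherrypy logger: Prometheus on 192.168.122.100 scraping the ceph-mgr prometheus module's /metrics endpoint (48455 bytes of exposition text). A hand-rolled scrape for spot checks, assuming the module's default port 9283 since the log omits it:

    import urllib.request

    url = "http://192.168.122.100:9283/metrics"  # 9283 = mgr/prometheus default, assumed
    with urllib.request.urlopen(url, timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("ceph_health_status"):
                print(line)  # gauge: 0.0 = HEALTH_OK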
Dec  7 05:23:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:10.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:10.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:23:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:11 np0005549474 nova_compute[256753]: 2025-12-07 10:23:11.595 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:23:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:12.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:23:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:23:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:23:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:23:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:23:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:23:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:23:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:23:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:12.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:12 np0005549474 nova_compute[256753]: 2025-12-07 10:23:12.784 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:23:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:14.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:14 np0005549474 podman[290864]: 2025-12-07 10:23:14.247261506 +0000 UTC m=+0.063206153 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Dec  7 05:23:14 np0005549474 podman[290865]: 2025-12-07 10:23:14.29000875 +0000 UTC m=+0.097019584 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
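Note: the two podman entries above are periodic health_status events for the edpm_ansible-managed containers; both report healthy with a zero failing streak, and the config_data blob shows the check is simply /openstack/healthcheck executed inside the container. The same check can be triggered on demand; a small Python wrapper around the podman CLI:

    import subprocess

    # `podman healthcheck run NAME` exits 0 when the container's check passes.
    for name in ("multipathd", "ovn_controller"):
        r = subprocess.run(["podman", "healthcheck", "run", name],
                           capture_output=True, text=True)
        status = "healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})"
        print(name, status)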
Dec  7 05:23:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:14.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:23:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:23:15 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
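Note: the handle_command/audit burst above is the cephadm mgr module's periodic reconcile talking to the monitor: it lists the OSD blocklist, regenerates a minimal ceph.conf, fetches the client.admin and client.bootstrap-osd keys, persists its state under config-key (osd_remove_queue, spec.nfs.cephfs), and checks the OSD tree for destroyed entries. Any of these can be replayed by hand when debugging; for example the blocklist query, sketched in Python:

    import json, subprocess

    # Same query the mgr dispatched: {"prefix": "osd blocklist ls", "format": "json"}.
    out = subprocess.check_output(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"], text=True
    )
    print(json.loads(out))  # list of blocklisted client addrs, likely empty here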
Dec  7 05:23:15 np0005549474 podman[291085]: 2025-12-07 10:23:15.908687327 +0000 UTC m=+0.039273251 container create aa71b63a1aa640a1c9237d88797c2d1ce3a26b740f8b81d8afd5e1d13b3fb45a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 05:23:15 np0005549474 systemd[1]: Started libpod-conmon-aa71b63a1aa640a1c9237d88797c2d1ce3a26b740f8b81d8afd5e1d13b3fb45a.scope.
Dec  7 05:23:15 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:23:15 np0005549474 podman[291085]: 2025-12-07 10:23:15.891189861 +0000 UTC m=+0.021775805 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:23:15 np0005549474 podman[291085]: 2025-12-07 10:23:15.987838173 +0000 UTC m=+0.118424107 container init aa71b63a1aa640a1c9237d88797c2d1ce3a26b740f8b81d8afd5e1d13b3fb45a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:23:15 np0005549474 podman[291085]: 2025-12-07 10:23:15.994474843 +0000 UTC m=+0.125060757 container start aa71b63a1aa640a1c9237d88797c2d1ce3a26b740f8b81d8afd5e1d13b3fb45a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_turing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec  7 05:23:15 np0005549474 podman[291085]: 2025-12-07 10:23:15.997064834 +0000 UTC m=+0.127650758 container attach aa71b63a1aa640a1c9237d88797c2d1ce3a26b740f8b81d8afd5e1d13b3fb45a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 05:23:15 np0005549474 systemd[1]: libpod-aa71b63a1aa640a1c9237d88797c2d1ce3a26b740f8b81d8afd5e1d13b3fb45a.scope: Deactivated successfully.
Dec  7 05:23:16 np0005549474 flamboyant_turing[291101]: 167 167
Dec  7 05:23:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:16 np0005549474 conmon[291101]: conmon aa71b63a1aa640a1c923 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa71b63a1aa640a1c9237d88797c2d1ce3a26b740f8b81d8afd5e1d13b3fb45a.scope/container/memory.events
Dec  7 05:23:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:16 np0005549474 podman[291085]: 2025-12-07 10:23:16.001117965 +0000 UTC m=+0.131703909 container died aa71b63a1aa640a1c9237d88797c2d1ce3a26b740f8b81d8afd5e1d13b3fb45a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_turing, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 05:23:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:16 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:23:16 np0005549474 systemd[1]: var-lib-containers-storage-overlay-414697e1ed163f1a03459e94235cbf3efe4b743c2e86c88a2dca97382a1ad592-merged.mount: Deactivated successfully.
Dec  7 05:23:16 np0005549474 podman[291085]: 2025-12-07 10:23:16.045076461 +0000 UTC m=+0.175662385 container remove aa71b63a1aa640a1c9237d88797c2d1ce3a26b740f8b81d8afd5e1d13b3fb45a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_turing, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:23:16 np0005549474 systemd[1]: libpod-conmon-aa71b63a1aa640a1c9237d88797c2d1ce3a26b740f8b81d8afd5e1d13b3fb45a.scope: Deactivated successfully.
Dec  7 05:23:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:16.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:16 np0005549474 podman[291126]: 2025-12-07 10:23:16.196968809 +0000 UTC m=+0.043121336 container create b187a64e58efb8aeacba198faae8ab5935464d43e3f1df409ec5976d7d2b75f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:23:16 np0005549474 systemd[1]: Started libpod-conmon-b187a64e58efb8aeacba198faae8ab5935464d43e3f1df409ec5976d7d2b75f0.scope.
Dec  7 05:23:16 np0005549474 podman[291126]: 2025-12-07 10:23:16.175250877 +0000 UTC m=+0.021403444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:23:16 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:23:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7662889d3c110fa1481ddb5a4adc22290e67f786897136e7e85bcfe458c29d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:23:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7662889d3c110fa1481ddb5a4adc22290e67f786897136e7e85bcfe458c29d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:23:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7662889d3c110fa1481ddb5a4adc22290e67f786897136e7e85bcfe458c29d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:23:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7662889d3c110fa1481ddb5a4adc22290e67f786897136e7e85bcfe458c29d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:23:16 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7662889d3c110fa1481ddb5a4adc22290e67f786897136e7e85bcfe458c29d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
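Note: the four kernel lines are informational: each bind mount into the new container touches an XFS filesystem formatted without the bigtime feature, so inode timestamps top out at 0x7fffffff seconds, the classic y2038 limit. Nothing is failing; the kernel is just flagging the horizon. The cutoff date, computed in Python:

    from datetime import datetime, timezone

    # 0x7fffffff is the 32-bit signed time_t maximum flagged by the kernel.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00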
Dec  7 05:23:16 np0005549474 podman[291126]: 2025-12-07 10:23:16.29172115 +0000 UTC m=+0.137873717 container init b187a64e58efb8aeacba198faae8ab5935464d43e3f1df409ec5976d7d2b75f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:23:16 np0005549474 podman[291126]: 2025-12-07 10:23:16.303546822 +0000 UTC m=+0.149699329 container start b187a64e58efb8aeacba198faae8ab5935464d43e3f1df409ec5976d7d2b75f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_johnson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 05:23:16 np0005549474 podman[291126]: 2025-12-07 10:23:16.306999296 +0000 UTC m=+0.153151833 container attach b187a64e58efb8aeacba198faae8ab5935464d43e3f1df409ec5976d7d2b75f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_johnson, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 05:23:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:16 np0005549474 inspiring_johnson[291142]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:23:16 np0005549474 inspiring_johnson[291142]: --> All data devices are unavailable
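Note: the two inspiring_johnson lines are ceph-volume evaluating this host's drive group: it sees no raw physical data devices and one LVM device, and reports all data devices unavailable, which is expected here because the only LV (ceph_vg0/ceph_lv0, listed in full below) already belongs to OSD 0. The per-device verdict can be inspected directly; a sketch in Python around ceph-volume's JSON inventory:

    import json, subprocess

    inv = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"], text=True))
    for dev in inv:
        print(dev["path"], "available:", dev["available"],
              "rejected:", dev.get("rejected_reasons", []))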
Dec  7 05:23:16 np0005549474 systemd[1]: libpod-b187a64e58efb8aeacba198faae8ab5935464d43e3f1df409ec5976d7d2b75f0.scope: Deactivated successfully.
Dec  7 05:23:16 np0005549474 podman[291126]: 2025-12-07 10:23:16.697524622 +0000 UTC m=+0.543677119 container died b187a64e58efb8aeacba198faae8ab5935464d43e3f1df409ec5976d7d2b75f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_johnson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  7 05:23:16 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f7662889d3c110fa1481ddb5a4adc22290e67f786897136e7e85bcfe458c29d7-merged.mount: Deactivated successfully.
Dec  7 05:23:16 np0005549474 podman[291126]: 2025-12-07 10:23:16.742771855 +0000 UTC m=+0.588924342 container remove b187a64e58efb8aeacba198faae8ab5935464d43e3f1df409ec5976d7d2b75f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 05:23:16 np0005549474 systemd[1]: libpod-conmon-b187a64e58efb8aeacba198faae8ab5935464d43e3f1df409ec5976d7d2b75f0.scope: Deactivated successfully.
Dec  7 05:23:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:17.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:23:17 np0005549474 podman[291258]: 2025-12-07 10:23:17.26909687 +0000 UTC m=+0.046797275 container create e7b6c76655829e7ab4fdb374a9e1c99fbdf108c6d8d97889ac05e78f2c2bbd6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:23:17 np0005549474 systemd[1]: Started libpod-conmon-e7b6c76655829e7ab4fdb374a9e1c99fbdf108c6d8d97889ac05e78f2c2bbd6f.scope.
Dec  7 05:23:17 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:23:17 np0005549474 podman[291258]: 2025-12-07 10:23:17.254876582 +0000 UTC m=+0.032577017 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:23:17 np0005549474 podman[291258]: 2025-12-07 10:23:17.353405957 +0000 UTC m=+0.131106392 container init e7b6c76655829e7ab4fdb374a9e1c99fbdf108c6d8d97889ac05e78f2c2bbd6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:23:17 np0005549474 podman[291258]: 2025-12-07 10:23:17.364634953 +0000 UTC m=+0.142335358 container start e7b6c76655829e7ab4fdb374a9e1c99fbdf108c6d8d97889ac05e78f2c2bbd6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 05:23:17 np0005549474 podman[291258]: 2025-12-07 10:23:17.367774598 +0000 UTC m=+0.145475033 container attach e7b6c76655829e7ab4fdb374a9e1c99fbdf108c6d8d97889ac05e78f2c2bbd6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 05:23:17 np0005549474 practical_diffie[291274]: 167 167
Dec  7 05:23:17 np0005549474 systemd[1]: libpod-e7b6c76655829e7ab4fdb374a9e1c99fbdf108c6d8d97889ac05e78f2c2bbd6f.scope: Deactivated successfully.
Dec  7 05:23:17 np0005549474 podman[291258]: 2025-12-07 10:23:17.369102744 +0000 UTC m=+0.146803159 container died e7b6c76655829e7ab4fdb374a9e1c99fbdf108c6d8d97889ac05e78f2c2bbd6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 05:23:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec  7 05:23:17 np0005549474 systemd[1]: var-lib-containers-storage-overlay-90f1081b68a5c7232669c23dc6fd4167a31cdc5222761d75fe6134773969e31d-merged.mount: Deactivated successfully.
Dec  7 05:23:17 np0005549474 podman[291258]: 2025-12-07 10:23:17.408312372 +0000 UTC m=+0.186012797 container remove e7b6c76655829e7ab4fdb374a9e1c99fbdf108c6d8d97889ac05e78f2c2bbd6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 05:23:17 np0005549474 systemd[1]: libpod-conmon-e7b6c76655829e7ab4fdb374a9e1c99fbdf108c6d8d97889ac05e78f2c2bbd6f.scope: Deactivated successfully.
Dec  7 05:23:17 np0005549474 podman[291298]: 2025-12-07 10:23:17.585349423 +0000 UTC m=+0.039803075 container create 30ea9536316d601f8d98f1032579842ed993f4e9bbb0445bc4b5c5c6386c56e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mirzakhani, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 05:23:17 np0005549474 systemd[1]: Started libpod-conmon-30ea9536316d601f8d98f1032579842ed993f4e9bbb0445bc4b5c5c6386c56e9.scope.
Dec  7 05:23:17 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:23:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d55d9524db4620641f473bc76b27f3ef07d45c43c5eb7bf41a9f14a825795e40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:23:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d55d9524db4620641f473bc76b27f3ef07d45c43c5eb7bf41a9f14a825795e40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:23:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d55d9524db4620641f473bc76b27f3ef07d45c43c5eb7bf41a9f14a825795e40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:23:17 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d55d9524db4620641f473bc76b27f3ef07d45c43c5eb7bf41a9f14a825795e40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:23:17 np0005549474 podman[291298]: 2025-12-07 10:23:17.660360017 +0000 UTC m=+0.114813689 container init 30ea9536316d601f8d98f1032579842ed993f4e9bbb0445bc4b5c5c6386c56e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:23:17 np0005549474 podman[291298]: 2025-12-07 10:23:17.56795787 +0000 UTC m=+0.022411572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:23:17 np0005549474 podman[291298]: 2025-12-07 10:23:17.670568565 +0000 UTC m=+0.125022217 container start 30ea9536316d601f8d98f1032579842ed993f4e9bbb0445bc4b5c5c6386c56e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mirzakhani, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:23:17 np0005549474 podman[291298]: 2025-12-07 10:23:17.673593777 +0000 UTC m=+0.128047449 container attach 30ea9536316d601f8d98f1032579842ed993f4e9bbb0445bc4b5c5c6386c56e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:23:17 np0005549474 nova_compute[256753]: 2025-12-07 10:23:17.787 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:23:17 np0005549474 nova_compute[256753]: 2025-12-07 10:23:17.790 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:23:17 np0005549474 nova_compute[256753]: 2025-12-07 10:23:17.790 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:23:17 np0005549474 nova_compute[256753]: 2025-12-07 10:23:17.790 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:23:17 np0005549474 nova_compute[256753]: 2025-12-07 10:23:17.827 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:23:17 np0005549474 nova_compute[256753]: 2025-12-07 10:23:17.828 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]: {
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:    "0": [
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:        {
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            "devices": [
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "/dev/loop3"
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            ],
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            "lv_name": "ceph_lv0",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            "lv_size": "21470642176",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            "name": "ceph_lv0",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            "tags": {
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.cluster_name": "ceph",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.crush_device_class": "",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.encrypted": "0",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.osd_id": "0",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.type": "block",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.vdo": "0",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:                "ceph.with_tpm": "0"
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            },
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            "type": "block",
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:            "vg_name": "ceph_vg0"
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:        }
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]:    ]
Dec  7 05:23:17 np0005549474 ecstatic_mirzakhani[291315]: }
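Note: the JSON document emitted by the ecstatic_mirzakhani container is ceph-volume lvm list output for this host: one logical volume, ceph_vg0/ceph_lv0 on /dev/loop3 (~20 GiB), carrying the full set of ceph.* LV tags that bind it to OSD 0 in cluster 75f4c9fd-539a-5e17-b55a-0a12a4e2736c. Those tags are what cephadm reads to (re)activate the OSD. Pulling the interesting fields back out, sketched in Python:

    import json, subprocess

    report = json.loads(subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"], text=True))
    for osd_id, lvs in report.items():
        for lv in lvs:
            print("osd." + osd_id, lv["lv_path"],
                  "fsid:", lv["tags"]["ceph.osd_fsid"])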
Dec  7 05:23:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:17 np0005549474 systemd[1]: libpod-30ea9536316d601f8d98f1032579842ed993f4e9bbb0445bc4b5c5c6386c56e9.scope: Deactivated successfully.
Dec  7 05:23:17 np0005549474 podman[291325]: 2025-12-07 10:23:17.978737348 +0000 UTC m=+0.021406864 container died 30ea9536316d601f8d98f1032579842ed993f4e9bbb0445bc4b5c5c6386c56e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:23:18 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d55d9524db4620641f473bc76b27f3ef07d45c43c5eb7bf41a9f14a825795e40-merged.mount: Deactivated successfully.
Dec  7 05:23:18 np0005549474 podman[291325]: 2025-12-07 10:23:18.033011326 +0000 UTC m=+0.075680862 container remove 30ea9536316d601f8d98f1032579842ed993f4e9bbb0445bc4b5c5c6386c56e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 05:23:18 np0005549474 systemd[1]: libpod-conmon-30ea9536316d601f8d98f1032579842ed993f4e9bbb0445bc4b5c5c6386c56e9.scope: Deactivated successfully.
Dec  7 05:23:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:18.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:18.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
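
The beast access lines follow a common-log-like layout with a trailing latency token; the anonymous "HEAD /" probes arriving every ~2 s from 192.168.122.100 and .102 look like external health checks. A regex sketch that parses the exact line logged above:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    m = BEAST.search('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
                     '[07/Dec/2025:10:23:18.071 +0000] "HEAD / HTTP/1.0" '
                     '200 0 - - - latency=0.000000000s')
    assert m and m.group("status") == "200"
    print(m.group("ip"), m.group("req"), m.group("latency"))
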
Dec  7 05:23:18 np0005549474 podman[291432]: 2025-12-07 10:23:18.705985496 +0000 UTC m=+0.068157217 container create 33d7649273da2ff0d70828c46fa1dc023916988b26f6cdb788ca26aa1614736f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bassi, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 05:23:18 np0005549474 systemd[1]: Started libpod-conmon-33d7649273da2ff0d70828c46fa1dc023916988b26f6cdb788ca26aa1614736f.scope.
Dec  7 05:23:18 np0005549474 podman[291432]: 2025-12-07 10:23:18.670913221 +0000 UTC m=+0.033085012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:23:18 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:23:18 np0005549474 podman[291432]: 2025-12-07 10:23:18.797787026 +0000 UTC m=+0.159958807 container init 33d7649273da2ff0d70828c46fa1dc023916988b26f6cdb788ca26aa1614736f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bassi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:23:18 np0005549474 podman[291432]: 2025-12-07 10:23:18.808138409 +0000 UTC m=+0.170310100 container start 33d7649273da2ff0d70828c46fa1dc023916988b26f6cdb788ca26aa1614736f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bassi, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 05:23:18 np0005549474 podman[291432]: 2025-12-07 10:23:18.812024604 +0000 UTC m=+0.174196335 container attach 33d7649273da2ff0d70828c46fa1dc023916988b26f6cdb788ca26aa1614736f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 05:23:18 np0005549474 happy_bassi[291449]: 167 167
Dec  7 05:23:18 np0005549474 systemd[1]: libpod-33d7649273da2ff0d70828c46fa1dc023916988b26f6cdb788ca26aa1614736f.scope: Deactivated successfully.
Dec  7 05:23:18 np0005549474 podman[291432]: 2025-12-07 10:23:18.816709382 +0000 UTC m=+0.178881073 container died 33d7649273da2ff0d70828c46fa1dc023916988b26f6cdb788ca26aa1614736f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 05:23:18 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3e5ea7623a3948e93672039dd339c01098febfb1f224aabba69fead7a981b955-merged.mount: Deactivated successfully.
Dec  7 05:23:18 np0005549474 podman[291432]: 2025-12-07 10:23:18.866878028 +0000 UTC m=+0.229049749 container remove 33d7649273da2ff0d70828c46fa1dc023916988b26f6cdb788ca26aa1614736f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  7 05:23:18 np0005549474 systemd[1]: libpod-conmon-33d7649273da2ff0d70828c46fa1dc023916988b26f6cdb788ca26aa1614736f.scope: Deactivated successfully.
Dec  7 05:23:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:18.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
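
Alertmanager's ceph-dashboard receiver keeps timing out against http://compute-{1,2}.ctlplane.example.com:8443/api/prometheus_receiver. Purely to illustrate the shape of that webhook call (this is a stand-in, not the Ceph dashboard implementation, and it ignores any TLS the real endpoint may use), a minimal receiver that acknowledges the alert batch so retries would stop:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Illustrative stand-in for the /api/prometheus_receiver endpoint the
    # Alertmanager webhook above cannot reach.
    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_error(404)
                return
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            payload = json.loads(body or b"{}")
            print("alerts received:", len(payload.get("alerts", [])))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
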
Dec  7 05:23:19 np0005549474 podman[291476]: 2025-12-07 10:23:19.083167829 +0000 UTC m=+0.047809063 container create 415ca855560f554b10dc878bf9c123634dee9198bfc3c88beb4acd2c68566810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 05:23:19 np0005549474 systemd[1]: Started libpod-conmon-415ca855560f554b10dc878bf9c123634dee9198bfc3c88beb4acd2c68566810.scope.
Dec  7 05:23:19 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:23:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61cdfbe534046ef9755441a57cbcc01631ccf9058c3f75c71e0f046f25bd748/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:23:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61cdfbe534046ef9755441a57cbcc01631ccf9058c3f75c71e0f046f25bd748/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:23:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61cdfbe534046ef9755441a57cbcc01631ccf9058c3f75c71e0f046f25bd748/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:23:19 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61cdfbe534046ef9755441a57cbcc01631ccf9058c3f75c71e0f046f25bd748/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
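
The four kernel lines above flag XFS mounts without the bigtime feature, whose inode timestamps cap at 0x7fffffff. That limit is the classic signed 32-bit epoch ceiling:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # the cap named in the kernel message
    print(limit, datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2147483647 2038-01-19 03:14:07+00:00
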
Dec  7 05:23:19 np0005549474 podman[291476]: 2025-12-07 10:23:19.064167222 +0000 UTC m=+0.028808486 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:23:19 np0005549474 podman[291476]: 2025-12-07 10:23:19.170247701 +0000 UTC m=+0.134888985 container init 415ca855560f554b10dc878bf9c123634dee9198bfc3c88beb4acd2c68566810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:23:19 np0005549474 podman[291476]: 2025-12-07 10:23:19.182296749 +0000 UTC m=+0.146937993 container start 415ca855560f554b10dc878bf9c123634dee9198bfc3c88beb4acd2c68566810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:23:19 np0005549474 podman[291476]: 2025-12-07 10:23:19.186084393 +0000 UTC m=+0.150725707 container attach 415ca855560f554b10dc878bf9c123634dee9198bfc3c88beb4acd2c68566810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Dec  7 05:23:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:23:19 np0005549474 lvm[291569]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:23:19 np0005549474 lvm[291569]: VG ceph_vg0 finished
Dec  7 05:23:19 np0005549474 ecstatic_mclaren[291493]: {}
Dec  7 05:23:19 np0005549474 systemd[1]: libpod-415ca855560f554b10dc878bf9c123634dee9198bfc3c88beb4acd2c68566810.scope: Deactivated successfully.
Dec  7 05:23:19 np0005549474 systemd[1]: libpod-415ca855560f554b10dc878bf9c123634dee9198bfc3c88beb4acd2c68566810.scope: Consumed 1.214s CPU time.
Dec  7 05:23:19 np0005549474 podman[291476]: 2025-12-07 10:23:19.949288189 +0000 UTC m=+0.913929453 container died 415ca855560f554b10dc878bf9c123634dee9198bfc3c88beb4acd2c68566810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:23:19 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b61cdfbe534046ef9755441a57cbcc01631ccf9058c3f75c71e0f046f25bd748-merged.mount: Deactivated successfully.
Dec  7 05:23:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:19] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Dec  7 05:23:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:19] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Dec  7 05:23:20 np0005549474 podman[291476]: 2025-12-07 10:23:19.999856387 +0000 UTC m=+0.964497631 container remove 415ca855560f554b10dc878bf9c123634dee9198bfc3c88beb4acd2c68566810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclaren, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 05:23:20 np0005549474 systemd[1]: libpod-conmon-415ca855560f554b10dc878bf9c123634dee9198bfc3c88beb4acd2c68566810.scope: Deactivated successfully.
Dec  7 05:23:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:23:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:20.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:23:20 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:23:20 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:23:20 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:23:20 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:23:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:20.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:21 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:23:21 np0005549474 podman[291609]: 2025-12-07 10:23:21.292841415 +0000 UTC m=+0.090244209 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
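
The health_status event above is podman's periodic healthcheck firing (the embedded config_data shows 'test': '/openstack/healthcheck'). The current health state can be read back on demand; a sketch assuming the container name from the log line:

    import json, subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    print(health["Status"], "failing streak:", health["FailingStreak"])
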
Dec  7 05:23:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec  7 05:23:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:22.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:22.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:22 np0005549474 nova_compute[256753]: 2025-12-07 10:23:22.829 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:23:22 np0005549474 nova_compute[256753]: 2025-12-07 10:23:22.831 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:23:22 np0005549474 nova_compute[256753]: 2025-12-07 10:23:22.831 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  7 05:23:22 np0005549474 nova_compute[256753]: 2025-12-07 10:23:22.831 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:23:22 np0005549474 nova_compute[256753]: 2025-12-07 10:23:22.864 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:23:22 np0005549474 nova_compute[256753]: 2025-12-07 10:23:22.865 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
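
The five nova_compute lines above show the python-ovs client keepalive: after roughly 5 s idle on tcp:127.0.0.1:6640 it sends an inactivity probe and drops to IDLE, and the POLLIN reply moves it back to ACTIVE. A condensed, generic sketch of that pattern (not the ovs.reconnect implementation):

    import time

    PROBE_INTERVAL = 5.0  # ovsdbapp logs ~5000 ms of idle before probing

    class Conn:
        def __init__(self):
            self.state, self.last_rx = "ACTIVE", time.monotonic()

        def tick(self, got_reply: bool):
            now = time.monotonic()
            if got_reply:
                self.state, self.last_rx = "ACTIVE", now
            elif self.state == "ACTIVE" and now - self.last_rx > PROBE_INTERVAL:
                self.send_probe()
                self.state = "IDLE"  # waiting on the probe reply

        def send_probe(self):
            pass  # an echo RPC in the real protocol
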
Dec  7 05:23:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec  7 05:23:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:24.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:23:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:24.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:23:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Dec  7 05:23:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:23:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:26.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:26.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:26 np0005549474 nova_compute[256753]: 2025-12-07 10:23:26.826 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:23:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:27.224Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:23:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:27.224Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:23:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:23:27 np0005549474 nova_compute[256753]: 2025-12-07 10:23:27.866 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:27.955329) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103007955416, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 624, "num_deletes": 251, "total_data_size": 834857, "memory_usage": 845632, "flush_reason": "Manual Compaction"}
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103007964379, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 580314, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34698, "largest_seqno": 35321, "table_properties": {"data_size": 577345, "index_size": 877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8026, "raw_average_key_size": 20, "raw_value_size": 571159, "raw_average_value_size": 1475, "num_data_blocks": 38, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102966, "oldest_key_time": 1765102966, "file_creation_time": 1765103007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 9070 microseconds, and 4663 cpu microseconds.
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:27.964439) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 580314 bytes OK
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:27.964467) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:27.966336) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:27.966361) EVENT_LOG_v1 {"time_micros": 1765103007966353, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:27.966384) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 831534, prev total WAL file size 831534, number of live WAL files 2.
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:27.967161) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(566KB)], [74(14MB)]
Dec  7 05:23:27 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103007967234, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15917428, "oldest_snapshot_seqno": -1}
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6538 keys, 12064079 bytes, temperature: kUnknown
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103008031156, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12064079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12023918, "index_size": 22705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 172612, "raw_average_key_size": 26, "raw_value_size": 11909476, "raw_average_value_size": 1821, "num_data_blocks": 887, "num_entries": 6538, "num_filter_entries": 6538, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765103007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:28.031522) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12064079 bytes
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:28.032679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 248.4 rd, 188.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.6 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(48.2) write-amplify(20.8) OK, records in: 7039, records dropped: 501 output_compression: NoCompression
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:28.032704) EVENT_LOG_v1 {"time_micros": 1765103008032693, "job": 42, "event": "compaction_finished", "compaction_time_micros": 64087, "compaction_time_cpu_micros": 29370, "output_level": 6, "num_output_files": 1, "total_output_size": 12064079, "num_input_records": 7039, "num_output_records": 6538, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103008032976, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103008037667, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:27.967050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:28.037844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:28.037853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:28.037856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:28.037859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:23:28.037862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
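
The compaction summary's write-amplify(20.8) and read-write-amplify(48.2) follow directly from the byte counts logged by JOBs 41 and 42 (L0 input table #76 = 580314 B, total compaction input = 15917428 B, L6 output table #77 = 12064079 B):

    # Byte counts from the rocksdb events above.
    l0_input = 580_314        # table #76, the JOB 41 L0 flush
    total_input = 15_917_428  # JOB 42 "input_data_size"
    output = 12_064_079       # table #77, the L6 result

    print(f"write-amplify      {output / l0_input:.1f}")                  # 20.8
    print(f"read-write-amplify {(total_input + output) / l0_input:.1f}")  # 48.2
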
Dec  7 05:23:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:28.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:28.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:23:28 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1217218982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:23:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:28.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:23:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 171 op/s
Dec  7 05:23:29 np0005549474 nova_compute[256753]: 2025-12-07 10:23:29.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:23:29 np0005549474 nova_compute[256753]: 2025-12-07 10:23:29.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:23:29 np0005549474 nova_compute[256753]: 2025-12-07 10:23:29.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:23:29 np0005549474 nova_compute[256753]: 2025-12-07 10:23:29.755 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  7 05:23:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:29] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:23:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:29] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:23:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:30.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:30.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:23:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Dec  7 05:23:31 np0005549474 nova_compute[256753]: 2025-12-07 10:23:31.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:23:31 np0005549474 nova_compute[256753]: 2025-12-07 10:23:31.781 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:23:31 np0005549474 nova_compute[256753]: 2025-12-07 10:23:31.782 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:23:31 np0005549474 nova_compute[256753]: 2025-12-07 10:23:31.782 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:23:31 np0005549474 nova_compute[256753]: 2025-12-07 10:23:31.782 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  7 05:23:31 np0005549474 nova_compute[256753]: 2025-12-07 10:23:31.782 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:23:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:32.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:23:32 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2359469985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:23:32 np0005549474 nova_compute[256753]: 2025-12-07 10:23:32.189 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
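
The resource tracker sizes its Ceph-backed disk pool by shelling out to ceph df, as seen above. The same call can be reproduced directly (assuming the client.openstack keyring and /etc/ceph/ceph.conf are readable from where this runs):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])
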
Dec  7 05:23:32 np0005549474 nova_compute[256753]: 2025-12-07 10:23:32.419 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:23:32 np0005549474 nova_compute[256753]: 2025-12-07 10:23:32.420 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4474MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  7 05:23:32 np0005549474 nova_compute[256753]: 2025-12-07 10:23:32.420 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:23:32 np0005549474 nova_compute[256753]: 2025-12-07 10:23:32.421 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:23:32 np0005549474 nova_compute[256753]: 2025-12-07 10:23:32.544 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  7 05:23:32 np0005549474 nova_compute[256753]: 2025-12-07 10:23:32.545 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  7 05:23:32 np0005549474 nova_compute[256753]: 2025-12-07 10:23:32.586 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:23:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:32.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:32 np0005549474 nova_compute[256753]: 2025-12-07 10:23:32.867 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:23:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:23:33 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785250674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:23:33 np0005549474 nova_compute[256753]: 2025-12-07 10:23:33.045 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:23:33 np0005549474 nova_compute[256753]: 2025-12-07 10:23:33.050 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  7 05:23:33 np0005549474 nova_compute[256753]: 2025-12-07 10:23:33.083 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  7 05:23:33 np0005549474 nova_compute[256753]: 2025-12-07 10:23:33.084 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  7 05:23:33 np0005549474 nova_compute[256753]: 2025-12-07 10:23:33.084 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
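
The inventory dict reported at 10:23:33.083 is what placement schedules against; effective capacity per resource class is (total - reserved) * allocation_ratio. Recomputing it from the logged values (trimmed to the relevant keys):

    # Inventory copied from the nova.scheduler.client.report line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
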
Dec  7 05:23:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Dec  7 05:23:34 np0005549474 nova_compute[256753]: 2025-12-07 10:23:34.080 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:23:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:23:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:34.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:23:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:34.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:34 np0005549474 nova_compute[256753]: 2025-12-07 10:23:34.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:23:34 np0005549474 nova_compute[256753]: 2025-12-07 10:23:34.868 256757 DEBUG oslo_concurrency.processutils [None req-df4cc2da-b1d5-430c-beef-9facdb501d68 24eb8006efd340518863613cf711b1e6 f2774f82d095448bbb688700083cf81d - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:23:34 np0005549474 nova_compute[256753]: 2025-12-07 10:23:34.891 256757 DEBUG oslo_concurrency.processutils [None req-df4cc2da-b1d5-430c-beef-9facdb501d68 24eb8006efd340518863613cf711b1e6 f2774f82d095448bbb688700083cf81d - - default default] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:23:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 171 op/s
Dec  7 05:23:35 np0005549474 nova_compute[256753]: 2025-12-07 10:23:35.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:23:35 np0005549474 nova_compute[256753]: 2025-12-07 10:23:35.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:23:35 np0005549474 nova_compute[256753]: 2025-12-07 10:23:35.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:23:35 np0005549474 nova_compute[256753]: 2025-12-07 10:23:35.775 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:23:35 np0005549474 nova_compute[256753]: 2025-12-07 10:23:35.775 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:23:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:36 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:36 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:36 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:23:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:36.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:37.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:23:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:37 np0005549474 nova_compute[256753]: 2025-12-07 10:23:37.915 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:23:37 np0005549474 nova_compute[256753]: 2025-12-07 10:23:37.916 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:23:37 np0005549474 nova_compute[256753]: 2025-12-07 10:23:37.917 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5047 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:23:37 np0005549474 nova_compute[256753]: 2025-12-07 10:23:37.917 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:23:37 np0005549474 nova_compute[256753]: 2025-12-07 10:23:37.918 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:23:37 np0005549474 nova_compute[256753]: 2025-12-07 10:23:37.921 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:23:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:38.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:23:38.634 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:23:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:23:38.634 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:23:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:23:38.634 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:23:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:38.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:38.919Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:23:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:38.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:23:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:23:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:23:39.892 164143 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  7 05:23:39 np0005549474 nova_compute[256753]: 2025-12-07 10:23:39.893 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:23:39 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:23:39.894 164143 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  7 05:23:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:39] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:23:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:39] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec  7 05:23:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:40.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:40.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:40 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:23:40.896 164143 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8da81261-a5d6-4df8-aa54-d9c0c8f72a67, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  7 05:23:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:23:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:42.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:23:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:23:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:23:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:23:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:23:42
Dec  7 05:23:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:23:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:23:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'volumes', '.nfs', 'vms', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'images', 'default.rgw.log']
Dec  7 05:23:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:23:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:23:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:23:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:23:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:23:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:42.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:42 np0005549474 nova_compute[256753]: 2025-12-07 10:23:42.956 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:23:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:44.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:23:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:44.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:23:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:23:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:23:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:23:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:46.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:23:46 np0005549474 podman[291724]: 2025-12-07 10:23:46.157935003 +0000 UTC m=+0.068032146 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  7 05:23:46 np0005549474 podman[291726]: 2025-12-07 10:23:46.184057615 +0000 UTC m=+0.095033502 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  7 05:23:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:47.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:23:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:47 np0005549474 nova_compute[256753]: 2025-12-07 10:23:47.959 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:23:47 np0005549474 nova_compute[256753]: 2025-12-07 10:23:47.961 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:23:47 np0005549474 nova_compute[256753]: 2025-12-07 10:23:47.961 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:23:47 np0005549474 nova_compute[256753]: 2025-12-07 10:23:47.961 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:23:48 np0005549474 nova_compute[256753]: 2025-12-07 10:23:48.015 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:23:48 np0005549474 nova_compute[256753]: 2025-12-07 10:23:48.016 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:23:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:48.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:23:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:48.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:23:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:48.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:23:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:23:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  7 05:23:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  7 05:23:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:50.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:50.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:23:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:52.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:52 np0005549474 podman[291798]: 2025-12-07 10:23:52.243969844 +0000 UTC m=+0.050623152 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  7 05:23:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:23:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:52.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:23:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:53 np0005549474 nova_compute[256753]: 2025-12-07 10:23:53.016 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:23:53 np0005549474 nova_compute[256753]: 2025-12-07 10:23:53.018 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:23:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:54.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:54.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:23:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:23:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:23:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:23:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:23:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:23:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:56.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:23:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:56.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:23:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:57.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:23:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:23:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:23:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:23:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:23:58 np0005549474 nova_compute[256753]: 2025-12-07 10:23:58.019 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:23:58 np0005549474 nova_compute[256753]: 2025-12-07 10:23:58.021 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:23:58 np0005549474 nova_compute[256753]: 2025-12-07 10:23:58.021 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:23:58 np0005549474 nova_compute[256753]: 2025-12-07 10:23:58.021 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:23:58 np0005549474 nova_compute[256753]: 2025-12-07 10:23:58.062 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:23:58 np0005549474 nova_compute[256753]: 2025-12-07 10:23:58.062 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:23:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:23:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:58.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:23:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:23:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:23:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:58.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:23:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:58.923Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:23:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:23:58.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:23:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:23:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:59] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:23:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:23:59] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:24:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:00.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:00.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:24:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:02.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:02.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  7 05:24:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2755057413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  7 05:24:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  7 05:24:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2755057413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  7 05:24:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:03 np0005549474 nova_compute[256753]: 2025-12-07 10:24:03.063 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:03 np0005549474 nova_compute[256753]: 2025-12-07 10:24:03.064 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:03 np0005549474 nova_compute[256753]: 2025-12-07 10:24:03.065 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:24:03 np0005549474 nova_compute[256753]: 2025-12-07 10:24:03.065 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:03 np0005549474 nova_compute[256753]: 2025-12-07 10:24:03.066 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:03 np0005549474 nova_compute[256753]: 2025-12-07 10:24:03.067 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:04.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:04.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:24:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:06.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:06.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:07.228Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:24:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:08 np0005549474 nova_compute[256753]: 2025-12-07 10:24:08.068 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:08 np0005549474 nova_compute[256753]: 2025-12-07 10:24:08.070 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:08 np0005549474 nova_compute[256753]: 2025-12-07 10:24:08.071 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:24:08 np0005549474 nova_compute[256753]: 2025-12-07 10:24:08.071 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:08.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:08 np0005549474 nova_compute[256753]: 2025-12-07 10:24:08.130 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:08 np0005549474 nova_compute[256753]: 2025-12-07 10:24:08.130 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:08.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:24:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:09] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:24:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:09] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec  7 05:24:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:10.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:10.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:24:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:24:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:12.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:24:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:24:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:24:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:24:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:24:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:24:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:24:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:24:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:24:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:12.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:13 np0005549474 nova_compute[256753]: 2025-12-07 10:24:13.176 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:13 np0005549474 nova_compute[256753]: 2025-12-07 10:24:13.177 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:13 np0005549474 nova_compute[256753]: 2025-12-07 10:24:13.177 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5046 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:24:13 np0005549474 nova_compute[256753]: 2025-12-07 10:24:13.177 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:13 np0005549474 nova_compute[256753]: 2025-12-07 10:24:13.178 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:13 np0005549474 nova_compute[256753]: 2025-12-07 10:24:13.181 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:14.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:14.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:16 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:24:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:16.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:16.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:17.230Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:24:17 np0005549474 podman[291872]: 2025-12-07 10:24:17.288306736 +0000 UTC m=+0.091881636 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 05:24:17 np0005549474 podman[291873]: 2025-12-07 10:24:17.314402098 +0000 UTC m=+0.119472169 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:24:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:18.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:18 np0005549474 nova_compute[256753]: 2025-12-07 10:24:18.182 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:18 np0005549474 nova_compute[256753]: 2025-12-07 10:24:18.184 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:18 np0005549474 nova_compute[256753]: 2025-12-07 10:24:18.185 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:24:18 np0005549474 nova_compute[256753]: 2025-12-07 10:24:18.185 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:18 np0005549474 nova_compute[256753]: 2025-12-07 10:24:18.236 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:18 np0005549474 nova_compute[256753]: 2025-12-07 10:24:18.236 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:18.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:18.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:24:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:19] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  7 05:24:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:19] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  7 05:24:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:20.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:20.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:21 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:24:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:24:21 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:24:21 np0005549474 podman[292095]: 2025-12-07 10:24:21.868939182 +0000 UTC m=+0.055594846 container create 97faaf7faa3ee2dc591bfd152fb9d9db35d8a2992d9745fb8ea6928b87d051fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Dec  7 05:24:21 np0005549474 systemd[1]: Started libpod-conmon-97faaf7faa3ee2dc591bfd152fb9d9db35d8a2992d9745fb8ea6928b87d051fa.scope.
Dec  7 05:24:21 np0005549474 podman[292095]: 2025-12-07 10:24:21.842736118 +0000 UTC m=+0.029391862 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:24:21 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:24:21 np0005549474 podman[292095]: 2025-12-07 10:24:21.957776055 +0000 UTC m=+0.144431759 container init 97faaf7faa3ee2dc591bfd152fb9d9db35d8a2992d9745fb8ea6928b87d051fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mccarthy, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:24:21 np0005549474 podman[292095]: 2025-12-07 10:24:21.9664061 +0000 UTC m=+0.153061754 container start 97faaf7faa3ee2dc591bfd152fb9d9db35d8a2992d9745fb8ea6928b87d051fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mccarthy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:24:21 np0005549474 podman[292095]: 2025-12-07 10:24:21.969243087 +0000 UTC m=+0.155898761 container attach 97faaf7faa3ee2dc591bfd152fb9d9db35d8a2992d9745fb8ea6928b87d051fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  7 05:24:21 np0005549474 systemd[1]: libpod-97faaf7faa3ee2dc591bfd152fb9d9db35d8a2992d9745fb8ea6928b87d051fa.scope: Deactivated successfully.
Dec  7 05:24:21 np0005549474 wizardly_mccarthy[292112]: 167 167
Dec  7 05:24:21 np0005549474 podman[292095]: 2025-12-07 10:24:21.973772141 +0000 UTC m=+0.160427815 container died 97faaf7faa3ee2dc591bfd152fb9d9db35d8a2992d9745fb8ea6928b87d051fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mccarthy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 05:24:21 np0005549474 conmon[292112]: conmon 97faaf7faa3ee2dc591b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97faaf7faa3ee2dc591bfd152fb9d9db35d8a2992d9745fb8ea6928b87d051fa.scope/container/memory.events
Dec  7 05:24:22 np0005549474 systemd[1]: var-lib-containers-storage-overlay-624e9e99738d6a334743c401446587689c47698276d77c716a2b048468ca5d6c-merged.mount: Deactivated successfully.
Dec  7 05:24:22 np0005549474 podman[292095]: 2025-12-07 10:24:22.024087343 +0000 UTC m=+0.210743007 container remove 97faaf7faa3ee2dc591bfd152fb9d9db35d8a2992d9745fb8ea6928b87d051fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mccarthy, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:24:22 np0005549474 systemd[1]: libpod-conmon-97faaf7faa3ee2dc591bfd152fb9d9db35d8a2992d9745fb8ea6928b87d051fa.scope: Deactivated successfully.
Dec  7 05:24:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:22.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:22 np0005549474 podman[292137]: 2025-12-07 10:24:22.240319728 +0000 UTC m=+0.054391623 container create fec8311ce861eca2a0ef0c1ae56c142ddc62975f96f858c93c0de633ef38743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_aryabhata, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:24:22 np0005549474 systemd[1]: Started libpod-conmon-fec8311ce861eca2a0ef0c1ae56c142ddc62975f96f858c93c0de633ef38743d.scope.
Dec  7 05:24:22 np0005549474 podman[292137]: 2025-12-07 10:24:22.214309299 +0000 UTC m=+0.028381204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:24:22 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:24:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51f81fc6cd08fe2121f8c6d4b1080ce6ec52d8cf7bbfa94a93c586455ccf474/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51f81fc6cd08fe2121f8c6d4b1080ce6ec52d8cf7bbfa94a93c586455ccf474/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51f81fc6cd08fe2121f8c6d4b1080ce6ec52d8cf7bbfa94a93c586455ccf474/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51f81fc6cd08fe2121f8c6d4b1080ce6ec52d8cf7bbfa94a93c586455ccf474/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:22 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51f81fc6cd08fe2121f8c6d4b1080ce6ec52d8cf7bbfa94a93c586455ccf474/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:22 np0005549474 podman[292137]: 2025-12-07 10:24:22.343711577 +0000 UTC m=+0.157783472 container init fec8311ce861eca2a0ef0c1ae56c142ddc62975f96f858c93c0de633ef38743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:24:22 np0005549474 podman[292137]: 2025-12-07 10:24:22.354098701 +0000 UTC m=+0.168170606 container start fec8311ce861eca2a0ef0c1ae56c142ddc62975f96f858c93c0de633ef38743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_aryabhata, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:24:22 np0005549474 podman[292137]: 2025-12-07 10:24:22.36400973 +0000 UTC m=+0.178081605 container attach fec8311ce861eca2a0ef0c1ae56c142ddc62975f96f858c93c0de633ef38743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 05:24:22 np0005549474 podman[292156]: 2025-12-07 10:24:22.419227186 +0000 UTC m=+0.101066766 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:24:22 np0005549474 affectionate_aryabhata[292154]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:24:22 np0005549474 affectionate_aryabhata[292154]: --> All data devices are unavailable
Dec  7 05:24:22 np0005549474 systemd[1]: libpod-fec8311ce861eca2a0ef0c1ae56c142ddc62975f96f858c93c0de633ef38743d.scope: Deactivated successfully.
Dec  7 05:24:22 np0005549474 podman[292137]: 2025-12-07 10:24:22.698349126 +0000 UTC m=+0.512421001 container died fec8311ce861eca2a0ef0c1ae56c142ddc62975f96f858c93c0de633ef38743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_aryabhata, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:24:22 np0005549474 systemd[1]: var-lib-containers-storage-overlay-a51f81fc6cd08fe2121f8c6d4b1080ce6ec52d8cf7bbfa94a93c586455ccf474-merged.mount: Deactivated successfully.
Dec  7 05:24:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:22.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:22 np0005549474 podman[292137]: 2025-12-07 10:24:22.734502701 +0000 UTC m=+0.548574546 container remove fec8311ce861eca2a0ef0c1ae56c142ddc62975f96f858c93c0de633ef38743d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Dec  7 05:24:22 np0005549474 systemd[1]: libpod-conmon-fec8311ce861eca2a0ef0c1ae56c142ddc62975f96f858c93c0de633ef38743d.scope: Deactivated successfully.
Dec  7 05:24:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:23 np0005549474 nova_compute[256753]: 2025-12-07 10:24:23.237 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:23 np0005549474 nova_compute[256753]: 2025-12-07 10:24:23.239 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:23 np0005549474 nova_compute[256753]: 2025-12-07 10:24:23.239 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:24:23 np0005549474 nova_compute[256753]: 2025-12-07 10:24:23.239 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec  7 05:24:23 np0005549474 nova_compute[256753]: 2025-12-07 10:24:23.300 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:23 np0005549474 nova_compute[256753]: 2025-12-07 10:24:23.301 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:23 np0005549474 podman[292294]: 2025-12-07 10:24:23.503125648 +0000 UTC m=+0.100242374 container create 1aeeaf10769d028aab118055ef44af14ec994b1f1e7654c3991855d05fbd897e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 05:24:23 np0005549474 podman[292294]: 2025-12-07 10:24:23.427775084 +0000 UTC m=+0.024891790 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:24:23 np0005549474 systemd[1]: Started libpod-conmon-1aeeaf10769d028aab118055ef44af14ec994b1f1e7654c3991855d05fbd897e.scope.
Dec  7 05:24:23 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:24:23 np0005549474 podman[292294]: 2025-12-07 10:24:23.587894929 +0000 UTC m=+0.185011665 container init 1aeeaf10769d028aab118055ef44af14ec994b1f1e7654c3991855d05fbd897e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Dec  7 05:24:23 np0005549474 podman[292294]: 2025-12-07 10:24:23.600619336 +0000 UTC m=+0.197736032 container start 1aeeaf10769d028aab118055ef44af14ec994b1f1e7654c3991855d05fbd897e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_archimedes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:24:23 np0005549474 podman[292294]: 2025-12-07 10:24:23.604031449 +0000 UTC m=+0.201148175 container attach 1aeeaf10769d028aab118055ef44af14ec994b1f1e7654c3991855d05fbd897e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:24:23 np0005549474 lucid_archimedes[292310]: 167 167
Dec  7 05:24:23 np0005549474 systemd[1]: libpod-1aeeaf10769d028aab118055ef44af14ec994b1f1e7654c3991855d05fbd897e.scope: Deactivated successfully.
Dec  7 05:24:23 np0005549474 podman[292294]: 2025-12-07 10:24:23.606384804 +0000 UTC m=+0.203501520 container died 1aeeaf10769d028aab118055ef44af14ec994b1f1e7654c3991855d05fbd897e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_archimedes, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:24:23 np0005549474 systemd[1]: var-lib-containers-storage-overlay-10c2d11a39a5208107dc368b5848517285d7792bd51936d8534359ac416be957-merged.mount: Deactivated successfully.
Dec  7 05:24:23 np0005549474 podman[292294]: 2025-12-07 10:24:23.642888559 +0000 UTC m=+0.240005245 container remove 1aeeaf10769d028aab118055ef44af14ec994b1f1e7654c3991855d05fbd897e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:24:23 np0005549474 systemd[1]: libpod-conmon-1aeeaf10769d028aab118055ef44af14ec994b1f1e7654c3991855d05fbd897e.scope: Deactivated successfully.
Dec  7 05:24:23 np0005549474 podman[292335]: 2025-12-07 10:24:23.856147723 +0000 UTC m=+0.057113799 container create d21db38bed99d73cf30bde017cf5da38200d7922213d06c7b8411c1ddb57f27e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 05:24:23 np0005549474 systemd[1]: Started libpod-conmon-d21db38bed99d73cf30bde017cf5da38200d7922213d06c7b8411c1ddb57f27e.scope.
Dec  7 05:24:23 np0005549474 podman[292335]: 2025-12-07 10:24:23.83000835 +0000 UTC m=+0.030974486 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:24:23 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:24:23 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110166cf4b24b7bd794f861fceff471eaf8fd96a14696ef99dd796dd62a9da2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:23 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110166cf4b24b7bd794f861fceff471eaf8fd96a14696ef99dd796dd62a9da2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:23 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110166cf4b24b7bd794f861fceff471eaf8fd96a14696ef99dd796dd62a9da2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:23 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110166cf4b24b7bd794f861fceff471eaf8fd96a14696ef99dd796dd62a9da2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:23 np0005549474 podman[292335]: 2025-12-07 10:24:23.952245723 +0000 UTC m=+0.153211779 container init d21db38bed99d73cf30bde017cf5da38200d7922213d06c7b8411c1ddb57f27e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 05:24:23 np0005549474 podman[292335]: 2025-12-07 10:24:23.960469257 +0000 UTC m=+0.161435303 container start d21db38bed99d73cf30bde017cf5da38200d7922213d06c7b8411c1ddb57f27e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:24:23 np0005549474 podman[292335]: 2025-12-07 10:24:23.963800388 +0000 UTC m=+0.164766434 container attach d21db38bed99d73cf30bde017cf5da38200d7922213d06c7b8411c1ddb57f27e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 05:24:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
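The radosgw beast lines arrive in start/done/access-log triplets; the anonymous HEAD / requests every two seconds from 192.168.122.100 and .102 have the shape of load-balancer health checks. A minimal sketch for pulling client, status, and latency out of one access-log line (the regex is fitted to the format seen here, an assumption rather than an official parser):

    import re

    line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:10:24:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = re.search(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*'
        r'latency=(?P<latency>[\d.]+)s', line)
    print(m['client'], m['status'], float(m['latency']))  # 192.168.122.100 200 0.0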
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]: {
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:    "0": [
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:        {
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            "devices": [
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "/dev/loop3"
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            ],
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            "lv_name": "ceph_lv0",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            "lv_size": "21470642176",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            "name": "ceph_lv0",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            "tags": {
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.cluster_name": "ceph",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.crush_device_class": "",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.encrypted": "0",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.osd_id": "0",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.type": "block",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.vdo": "0",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:                "ceph.with_tpm": "0"
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            },
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            "type": "block",
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:            "vg_name": "ceph_vg0"
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:        }
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]:    ]
Dec  7 05:24:24 np0005549474 amazing_thompson[292351]: }
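The JSON block printed by amazing_thompson is keyed by OSD id and carries the LV tags cephadm uses for its device inventory; the fields match ceph-volume lvm list --format json output, though the log itself does not name the command, so treat that as an inference. A sketch that reduces such a report to osd_id -> (devices, lv_path):

    import json

    def osd_devices(report_text):
        """Map each OSD id in a ceph-volume style report to its PVs and LV path."""
        result = {}
        for osd_id, lvs in json.loads(report_text).items():
            for lv in lvs:
                result[osd_id] = (lv["devices"], lv["lv_path"])
        return result

    # For the report above: {'0': (['/dev/loop3'], '/dev/ceph_vg0/ceph_lv0')}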
Dec  7 05:24:24 np0005549474 systemd[1]: libpod-d21db38bed99d73cf30bde017cf5da38200d7922213d06c7b8411c1ddb57f27e.scope: Deactivated successfully.
Dec  7 05:24:24 np0005549474 podman[292335]: 2025-12-07 10:24:24.309924935 +0000 UTC m=+0.510891011 container died d21db38bed99d73cf30bde017cf5da38200d7922213d06c7b8411c1ddb57f27e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:24:24 np0005549474 systemd[1]: var-lib-containers-storage-overlay-110166cf4b24b7bd794f861fceff471eaf8fd96a14696ef99dd796dd62a9da2e-merged.mount: Deactivated successfully.
Dec  7 05:24:24 np0005549474 podman[292335]: 2025-12-07 10:24:24.357236405 +0000 UTC m=+0.558202441 container remove d21db38bed99d73cf30bde017cf5da38200d7922213d06c7b8411c1ddb57f27e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:24:24 np0005549474 systemd[1]: libpod-conmon-d21db38bed99d73cf30bde017cf5da38200d7922213d06c7b8411c1ddb57f27e.scope: Deactivated successfully.
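Lines like these trace the full life of a short-lived helper container: init, start, attach, exit (container died), unmount of its overlay, remove, and finally systemd deactivating the libpod and conmon scopes. The same sequence can be watched as structured events; a sketch using podman's event stream (the --since window is an arbitrary choice):

    import json
    import subprocess

    # Stream recent container lifecycle events, one JSON object per line.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--since", "5m"],
        stdout=subprocess.PIPE, text=True)
    for raw in proc.stdout:
        ev = json.loads(raw)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))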
Dec  7 05:24:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:24.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:24 np0005549474 nova_compute[256753]: 2025-12-07 10:24:24.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210

Dec  7 05:24:24 np0005549474 podman[292466]: 2025-12-07 10:24:24.998993392 +0000 UTC m=+0.057175750 container create 695227641fe92a81c630bb6d179bdef306e03345e951046b3b199939a1a2cd70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_nightingale, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:24:25 np0005549474 systemd[1]: Started libpod-conmon-695227641fe92a81c630bb6d179bdef306e03345e951046b3b199939a1a2cd70.scope.
Dec  7 05:24:25 np0005549474 podman[292466]: 2025-12-07 10:24:24.975105471 +0000 UTC m=+0.033287919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:24:25 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:24:25 np0005549474 podman[292466]: 2025-12-07 10:24:25.091774212 +0000 UTC m=+0.149956650 container init 695227641fe92a81c630bb6d179bdef306e03345e951046b3b199939a1a2cd70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_nightingale, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec  7 05:24:25 np0005549474 podman[292466]: 2025-12-07 10:24:25.100331165 +0000 UTC m=+0.158513503 container start 695227641fe92a81c630bb6d179bdef306e03345e951046b3b199939a1a2cd70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:24:25 np0005549474 podman[292466]: 2025-12-07 10:24:25.103104821 +0000 UTC m=+0.161287249 container attach 695227641fe92a81c630bb6d179bdef306e03345e951046b3b199939a1a2cd70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:24:25 np0005549474 interesting_nightingale[292483]: 167 167
Dec  7 05:24:25 np0005549474 systemd[1]: libpod-695227641fe92a81c630bb6d179bdef306e03345e951046b3b199939a1a2cd70.scope: Deactivated successfully.
Dec  7 05:24:25 np0005549474 podman[292466]: 2025-12-07 10:24:25.110328317 +0000 UTC m=+0.168510685 container died 695227641fe92a81c630bb6d179bdef306e03345e951046b3b199939a1a2cd70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 05:24:25 np0005549474 systemd[1]: var-lib-containers-storage-overlay-560a312408897a47e0c0459bf58a9ec005042019616e818515d8f85091fbeda8-merged.mount: Deactivated successfully.
Dec  7 05:24:25 np0005549474 podman[292466]: 2025-12-07 10:24:25.152541929 +0000 UTC m=+0.210724297 container remove 695227641fe92a81c630bb6d179bdef306e03345e951046b3b199939a1a2cd70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  7 05:24:25 np0005549474 systemd[1]: libpod-conmon-695227641fe92a81c630bb6d179bdef306e03345e951046b3b199939a1a2cd70.scope: Deactivated successfully.
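The one-line payload of interesting_nightingale, "167 167", looks like a uid/gid probe: cephadm typically launches a throwaway container to discover which user and group own the Ceph files inside the image, and 167:167 is the ceph user and group on RHEL-family builds. Consuming it is a two-token parse:

    uid, gid = map(int, "167 167".split())
    # (167, 167) -> run the ceph daemons as ceph:ceph inside the image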
Dec  7 05:24:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:25 np0005549474 podman[292509]: 2025-12-07 10:24:25.374793248 +0000 UTC m=+0.072991281 container create 6426861308b0285cf0c9eca0c28beae9808ec2c1981b9870cf784cf1f7318d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_snyder, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 05:24:25 np0005549474 systemd[1]: Started libpod-conmon-6426861308b0285cf0c9eca0c28beae9808ec2c1981b9870cf784cf1f7318d57.scope.
Dec  7 05:24:25 np0005549474 podman[292509]: 2025-12-07 10:24:25.346225469 +0000 UTC m=+0.044423582 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:24:25 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:24:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e08cea466d5cd443541ba70d0bf7aa978354a047ca5bd6b43c716975db257/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e08cea466d5cd443541ba70d0bf7aa978354a047ca5bd6b43c716975db257/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e08cea466d5cd443541ba70d0bf7aa978354a047ca5bd6b43c716975db257/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:25 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e08cea466d5cd443541ba70d0bf7aa978354a047ca5bd6b43c716975db257/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:24:25 np0005549474 podman[292509]: 2025-12-07 10:24:25.471170355 +0000 UTC m=+0.169368448 container init 6426861308b0285cf0c9eca0c28beae9808ec2c1981b9870cf784cf1f7318d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 05:24:25 np0005549474 podman[292509]: 2025-12-07 10:24:25.489910477 +0000 UTC m=+0.188108550 container start 6426861308b0285cf0c9eca0c28beae9808ec2c1981b9870cf784cf1f7318d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_snyder, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 05:24:25 np0005549474 podman[292509]: 2025-12-07 10:24:25.494290497 +0000 UTC m=+0.192488610 container attach 6426861308b0285cf0c9eca0c28beae9808ec2c1981b9870cf784cf1f7318d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_snyder, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  7 05:24:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
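The ganesha.nfsd daemon keeps restarting its 90-second grace period because no NFS client has state to reclaim (clid count(0)); with the rados_cluster recovery backend, this grace bookkeeping lives in a shared RADOS object. The ret=-45 reads like a negated errno; if it is a raw Linux errno, 45 maps to EL2NSYNC, which a quick lookup confirms (that convention is an assumption, not something the log states):

    import errno
    import os

    ret = -45
    print(errno.errorcode.get(-ret), os.strerror(-ret))
    # EL2NSYNC Level 2 not synchronized  (only meaningful if ret is a -errno)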
Dec  7 05:24:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:26.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:26 np0005549474 lvm[292600]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:24:26 np0005549474 lvm[292600]: VG ceph_vg0 finished
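These two lvm messages are event-driven autoactivation: once /dev/loop3, the only PV, comes online, VG ceph_vg0 is complete and its LVs can be activated. The resulting layout can be dumped in machine-readable form; a sketch around the stock reporting command (lvs accepts --reportformat json):

    import json
    import subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json", "ceph_vg0"],
        check=True, capture_output=True, text=True).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        print(lv["lv_name"], lv["vg_name"], lv["lv_size"])
    # expected here: ceph_lv0 ceph_vg0 <size>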
Dec  7 05:24:26 np0005549474 clever_snyder[292525]: {}
Dec  7 05:24:26 np0005549474 systemd[1]: libpod-6426861308b0285cf0c9eca0c28beae9808ec2c1981b9870cf784cf1f7318d57.scope: Deactivated successfully.
Dec  7 05:24:26 np0005549474 podman[292509]: 2025-12-07 10:24:26.226917781 +0000 UTC m=+0.925115814 container died 6426861308b0285cf0c9eca0c28beae9808ec2c1981b9870cf784cf1f7318d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_snyder, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 05:24:26 np0005549474 systemd[1]: libpod-6426861308b0285cf0c9eca0c28beae9808ec2c1981b9870cf784cf1f7318d57.scope: Consumed 1.161s CPU time.
Dec  7 05:24:26 np0005549474 systemd[1]: var-lib-containers-storage-overlay-936e08cea466d5cd443541ba70d0bf7aa978354a047ca5bd6b43c716975db257-merged.mount: Deactivated successfully.
Dec  7 05:24:26 np0005549474 podman[292509]: 2025-12-07 10:24:26.262288535 +0000 UTC m=+0.960486568 container remove 6426861308b0285cf0c9eca0c28beae9808ec2c1981b9870cf784cf1f7318d57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_snyder, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 05:24:26 np0005549474 systemd[1]: libpod-conmon-6426861308b0285cf0c9eca0c28beae9808ec2c1981b9870cf784cf1f7318d57.scope: Deactivated successfully.
Dec  7 05:24:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:24:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:24:26 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:24:26 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:24:26 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:24:26 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:24:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:26.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:27.231Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
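This alertmanager error repeats throughout the section: the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443) do not answer before the notification timeout, so those alerts are dropped after two attempts. A minimal reachability probe for one receiver, with the URL copied from the log and an arbitrary 5-second timeout:

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        # Alertmanager POSTs JSON to this path; an empty body is enough to
        # distinguish "unreachable" from "reachable but rejecting".
        req = urllib.request.Request(url, data=b"{}", method="POST")
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable, HTTP", resp.status)
    except OSError as exc:
        print("unreachable:", exc)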
Dec  7 05:24:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec  7 05:24:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:24:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:24:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:24:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:28.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:24:28 np0005549474 nova_compute[256753]: 2025-12-07 10:24:28.301 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:28 np0005549474 nova_compute[256753]: 2025-12-07 10:24:28.303 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:28 np0005549474 nova_compute[256753]: 2025-12-07 10:24:28.304 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:24:28 np0005549474 nova_compute[256753]: 2025-12-07 10:24:28.304 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:28 np0005549474 nova_compute[256753]: 2025-12-07 10:24:28.305 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:28 np0005549474 nova_compute[256753]: 2025-12-07 10:24:28.306 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
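This six-line burst is the normal keepalive rhythm of the ovsdbapp client: after roughly 5 s of silence on tcp:127.0.0.1:6640 it sends an inactivity probe, drops to IDLE, and returns to ACTIVE when the reply arrives, so the repetition is not an error. The probe is a JSON-RPC echo, which can be reproduced against the same socket (a bare-socket sketch, assuming the OVSDB server from the log is listening):

    import json
    import socket

    with socket.create_connection(("127.0.0.1", 6640), timeout=5) as s:
        s.sendall(json.dumps({"method": "echo", "params": [], "id": "probe"}).encode())
        print(s.recv(4096).decode())
    # an OVSDB server answers {"result": [], "error": null, "id": "probe"}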
Dec  7 05:24:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:28.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:28.927Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:24:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:29 np0005549474 nova_compute[256753]: 2025-12-07 10:24:29.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:24:29 np0005549474 nova_compute[256753]: 2025-12-07 10:24:29.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:24:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:29] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  7 05:24:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:29] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  7 05:24:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:30.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:30.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:24:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec  7 05:24:31 np0005549474 nova_compute[256753]: 2025-12-07 10:24:31.755 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:24:31 np0005549474 nova_compute[256753]: 2025-12-07 10:24:31.756 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:24:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:32.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:32.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.307 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.309 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.309 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.309 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.360 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.362 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.781 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.781 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.781 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.781 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:24:33 np0005549474 nova_compute[256753]: 2025-12-07 10:24:33.782 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:24:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:34.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:34 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:24:34 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175994644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:24:34 np0005549474 nova_compute[256753]: 2025-12-07 10:24:34.247 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
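Nova's resource tracker audits Ceph capacity by shelling out to exactly the command logged above, and the ceph-mon audit entries at 10:24:34 show the same request landing as a df mon_command from client.openstack. Reproducing the call and reading the totals (the field names follow the standard ceph df JSON schema):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])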
Dec  7 05:24:34 np0005549474 nova_compute[256753]: 2025-12-07 10:24:34.434 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:24:34 np0005549474 nova_compute[256753]: 2025-12-07 10:24:34.435 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4447MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:24:34 np0005549474 nova_compute[256753]: 2025-12-07 10:24:34.436 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:24:34 np0005549474 nova_compute[256753]: 2025-12-07 10:24:34.436 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:24:34 np0005549474 nova_compute[256753]: 2025-12-07 10:24:34.511 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:24:34 np0005549474 nova_compute[256753]: 2025-12-07 10:24:34.512 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:24:34 np0005549474 nova_compute[256753]: 2025-12-07 10:24:34.532 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:24:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:34.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:34 np0005549474 nova_compute[256753]: 2025-12-07 10:24:34.997 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:24:35 np0005549474 nova_compute[256753]: 2025-12-07 10:24:35.003 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:24:35 np0005549474 nova_compute[256753]: 2025-12-07 10:24:35.021 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
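The inventory dict above is enough to reproduce the capacity placement exposes for this node, computed as (total - reserved) * allocation_ratio per resource class; with the logged values that is 32 schedulable VCPUs, 7168 MB of RAM, and 52.2 GB of disk:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 52.2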
Dec  7 05:24:35 np0005549474 nova_compute[256753]: 2025-12-07 10:24:35.023 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:24:35 np0005549474 nova_compute[256753]: 2025-12-07 10:24:35.023 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:24:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:36 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:24:36 np0005549474 nova_compute[256753]: 2025-12-07 10:24:36.018 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:24:36 np0005549474 nova_compute[256753]: 2025-12-07 10:24:36.019 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:24:36 np0005549474 nova_compute[256753]: 2025-12-07 10:24:36.019 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:24:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:36.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:36.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:37.233Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:24:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:37 np0005549474 nova_compute[256753]: 2025-12-07 10:24:37.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:24:37 np0005549474 nova_compute[256753]: 2025-12-07 10:24:37.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:24:37 np0005549474 nova_compute[256753]: 2025-12-07 10:24:37.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:24:37 np0005549474 nova_compute[256753]: 2025-12-07 10:24:37.790 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:24:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:38.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:38 np0005549474 nova_compute[256753]: 2025-12-07 10:24:38.363 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:38 np0005549474 nova_compute[256753]: 2025-12-07 10:24:38.365 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:38 np0005549474 nova_compute[256753]: 2025-12-07 10:24:38.365 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:24:38 np0005549474 nova_compute[256753]: 2025-12-07 10:24:38.366 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:38 np0005549474 nova_compute[256753]: 2025-12-07 10:24:38.403 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:38 np0005549474 nova_compute[256753]: 2025-12-07 10:24:38.405 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:24:38.635 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:24:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:24:38.635 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:24:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:24:38.635 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:24:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:38.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:38.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:24:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:39] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  7 05:24:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:39] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  7 05:24:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:40.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:40.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
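
ganesha keeps restarting its 90-second grace period every ~5 seconds, and rados_cluster_grace_enforcing keeps failing (ret=-45), so grace is never lifted. A sketch that measures the restart cadence from journal text on stdin, assuming the DD/MM/YYYY HH:MM:SS timestamp format shown above:

    import re, sys
    from datetime import datetime

    PAT = re.compile(r'(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) .*NFS Server Now IN GRACE')
    stamps = [datetime.strptime(m.group(1), '%d/%m/%Y %H:%M:%S')
              for m in map(PAT.search, sys.stdin) if m]
    for a, b in zip(stamps, stamps[1:]):
        print((b - a).total_seconds(), 's between grace restarts')
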
Dec  7 05:24:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:42.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:24:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
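
Every mgr poll appears twice on the mon: once as handle_command and once on the audit channel. The same query can be run by hand with the ceph CLI; the command and its JSON form are exactly what the audit line records:

    import json, subprocess

    out = subprocess.run(['ceph', 'osd', 'blocklist', 'ls', '--format', 'json'],
                         capture_output=True, text=True, check=True)
    entries = json.loads(out.stdout)  # blocklisted addr/until entries, [] here
    print(len(entries), 'blocklist entries')
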
Dec  7 05:24:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:24:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:24:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:24:42
Dec  7 05:24:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:24:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:24:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', '.nfs', 'backups', '.mgr', 'cephfs.cephfs.meta', 'images']
Dec  7 05:24:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
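
This balancer pass found nothing to do: upmap mode, at most 5% of PGs misplaced, and 0 of a maximum 10 upmap changes prepared. A sketch confirming the same state from the CLI, assuming `ceph balancer status` honours --format json and reports at least mode and active fields:

    import json, subprocess

    status = json.loads(subprocess.run(
        ['ceph', 'balancer', 'status', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout)
    print(status.get('mode'), status.get('active'))
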
Dec  7 05:24:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:24:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:24:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:24:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:24:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:42.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
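
Every pool line above is the same computation: pg target = usage_ratio * bias * a cluster-wide PG budget, then quantized. The numbers are consistent with a budget of 300 (mon_target_pg_per_osd=100 * 3 OSDs, an inference for this cluster, not a value read from its config):

    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd=100 * 3 OSDs; fits every line

    def pg_target(usage_ratio, bias, budget=PG_BUDGET):
        # "pg target" as logged: the pool's share of raw space, scaled by
        # its bias and the cluster PG budget.
        return usage_ratio * bias * budget

    print(pg_target(0.000665858301588852, 1.0))   # 0.19975... ('images' line)
    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557... ('.mgr' line)
    # The autoscaler then keeps the current pg_num unless the target is off
    # by the threshold factor (3x by default), which is why every pool here
    # stays "quantized to" its current value.
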
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
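
The rbd_support module reloads its trash-purge and mirror-snapshot schedules per RBD pool; the empty start_after= fields match a cluster with no schedules configured. The same schedules can be listed from the rbd CLI:

    import subprocess

    # Both subcommands exist in the rbd CLI; empty output here would match
    # the empty start_after= fields above.
    for args in (['rbd', 'trash', 'purge', 'schedule', 'ls', '-p', 'vms'],
                 ['rbd', 'mirror', 'snapshot', 'schedule', 'ls', '-p', 'vms']):
        out = subprocess.run(args, capture_output=True, text=True)
        print(' '.join(args), '->', out.stdout.strip() or '(none)')
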
Dec  7 05:24:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:43 np0005549474 nova_compute[256753]: 2025-12-07 10:24:43.406 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:44.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:44 np0005549474 ceph-mgr[74811]: [devicehealth INFO root] Check health
Dec  7 05:24:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:44 np0005549474 nova_compute[256753]: 2025-12-07 10:24:44.786 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
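
The "Running periodic task" line is oslo.service's periodic_task machinery inside nova-compute. A minimal sketch of how such a task is declared; class and task names mirror the log but are illustrative, not Nova's actual ComputeManager:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _sync_scheduler_instance_info(self, context):
            pass  # the decorator registers the task; the runner logs it

    Manager().run_periodic_tasks(context=None)  # emits the DEBUG line above
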
Dec  7 05:24:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:24:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:46.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:46.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:47.234Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:24:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:48.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:48 np0005549474 podman[292732]: 2025-12-07 10:24:48.269285082 +0000 UTC m=+0.082039677 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 05:24:48 np0005549474 podman[292733]: 2025-12-07 10:24:48.35796807 +0000 UTC m=+0.161932816 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
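
podman records a health_status event each time the healthcheck timer fires; both the multipathd and ovn_controller containers report healthy with a zero failing streak. The same check can be run by hand (the inspect format path differs across podman versions, hence trying both):

    import subprocess

    rc = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd']).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')

    for fmt in ('{{.State.Health.Status}}', '{{.State.Healthcheck.Status}}'):
        out = subprocess.run(['podman', 'inspect', '--format', fmt, 'multipathd'],
                             capture_output=True, text=True)
        if out.returncode == 0:
            print(out.stdout.strip())
            break
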
Dec  7 05:24:48 np0005549474 nova_compute[256753]: 2025-12-07 10:24:48.407 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:48.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:48.930Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:24:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:49] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:24:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:49] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:24:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:50.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:50.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:24:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:52.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:52.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:53 np0005549474 podman[292805]: 2025-12-07 10:24:53.261864344 +0000 UTC m=+0.070080792 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:24:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:53 np0005549474 nova_compute[256753]: 2025-12-07 10:24:53.410 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:53 np0005549474 nova_compute[256753]: 2025-12-07 10:24:53.412 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:53 np0005549474 nova_compute[256753]: 2025-12-07 10:24:53.412 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:24:53 np0005549474 nova_compute[256753]: 2025-12-07 10:24:53.412 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:53 np0005549474 nova_compute[256753]: 2025-12-07 10:24:53.449 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:53 np0005549474 nova_compute[256753]: 2025-12-07 10:24:53.450 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
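
nova-compute's OVSDB client wakes every ~5 seconds: the connection to tcp:127.0.0.1:6640 has been idle past the probe interval, python-ovs sends an inactivity probe and drops to IDLE, and the reply (POLLIN on fd 24) returns it to ACTIVE. A sketch that extracts the probe round-trip from journal text on stdin, keyed on the exact phrases above:

    import re, sys
    from datetime import datetime

    TS = re.compile(r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ DEBUG')
    probe = None
    for line in sys.stdin:
        m = TS.search(line)
        if not m:
            continue
        t = datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S.%f')
        if 'sending inactivity probe' in line:
            probe = t
        elif 'entering ACTIVE' in line and probe:
            print('probe round-trip:', (t - probe).total_seconds(), 's')
            probe = None
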
Dec  7 05:24:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:54.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:54.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:24:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:24:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:24:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:24:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:24:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:56.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:56.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:57.236Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:24:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:57.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:24:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:24:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:24:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:24:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:24:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:24:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:58.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:24:58 np0005549474 nova_compute[256753]: 2025-12-07 10:24:58.452 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:58 np0005549474 nova_compute[256753]: 2025-12-07 10:24:58.453 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:24:58 np0005549474 nova_compute[256753]: 2025-12-07 10:24:58.453 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:24:58 np0005549474 nova_compute[256753]: 2025-12-07 10:24:58.453 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:58 np0005549474 nova_compute[256753]: 2025-12-07 10:24:58.482 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:24:58 np0005549474 nova_compute[256753]: 2025-12-07 10:24:58.482 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:24:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:24:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:24:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:58.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:24:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:58.931Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:24:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:24:58.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:24:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:24:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:59] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  7 05:24:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:24:59] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  7 05:25:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:00.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:00.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:02.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:02.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  7 05:25:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2005296220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  7 05:25:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  7 05:25:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2005296220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
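
client.openstack (most likely Cinder's periodic pool-stats check, given the volumes pool) asks the mon for df and the pool quota. The equivalent queries by hand, using the exact commands recorded in the audit lines; field names as in current `ceph df -f json` output:

    import json, subprocess

    def mon(*args):
        return json.loads(subprocess.run(['ceph', *args, '--format', 'json'],
                                         capture_output=True, text=True,
                                         check=True).stdout)

    df = mon('df')                                  # {"stats": ..., "pools": ...}
    quota = mon('osd', 'pool', 'get-quota', 'volumes')
    print(df['stats']['total_bytes'], quota)
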
Dec  7 05:25:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:03 np0005549474 nova_compute[256753]: 2025-12-07 10:25:03.483 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:03 np0005549474 nova_compute[256753]: 2025-12-07 10:25:03.485 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:03 np0005549474 nova_compute[256753]: 2025-12-07 10:25:03.485 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:25:03 np0005549474 nova_compute[256753]: 2025-12-07 10:25:03.485 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:03 np0005549474 nova_compute[256753]: 2025-12-07 10:25:03.529 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:25:03 np0005549474 nova_compute[256753]: 2025-12-07 10:25:03.530 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:04.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:04.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:25:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:06.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:06.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:07.237Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:25:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:08.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:08 np0005549474 nova_compute[256753]: 2025-12-07 10:25:08.530 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:08 np0005549474 nova_compute[256753]: 2025-12-07 10:25:08.532 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:08 np0005549474 nova_compute[256753]: 2025-12-07 10:25:08.533 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:25:08 np0005549474 nova_compute[256753]: 2025-12-07 10:25:08.533 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:08 np0005549474 nova_compute[256753]: 2025-12-07 10:25:08.565 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:25:08 np0005549474 nova_compute[256753]: 2025-12-07 10:25:08.565 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:08.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:08.932Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:25:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:25:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:09] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  7 05:25:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:09] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec  7 05:25:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:10.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:10.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=cleanup t=2025-12-07T10:25:11.665444905Z level=info msg="Completed cleanup jobs" duration=13.637192ms
Dec  7 05:25:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=grafana.update.checker t=2025-12-07T10:25:11.784981734Z level=info msg="Update check succeeded" duration=47.826484ms
Dec  7 05:25:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0[106493]: logger=plugins.update.checker t=2025-12-07T10:25:11.822863437Z level=info msg="Update check succeeded" duration=90.5831ms
Dec  7 05:25:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:12.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:25:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:25:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:25:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:25:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:25:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:25:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:25:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:25:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:12.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:13 np0005549474 nova_compute[256753]: 2025-12-07 10:25:13.567 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:13 np0005549474 nova_compute[256753]: 2025-12-07 10:25:13.594 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:13 np0005549474 nova_compute[256753]: 2025-12-07 10:25:13.594 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5028 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:25:13 np0005549474 nova_compute[256753]: 2025-12-07 10:25:13.595 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:13 np0005549474 nova_compute[256753]: 2025-12-07 10:25:13.597 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:25:13 np0005549474 nova_compute[256753]: 2025-12-07 10:25:13.598 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:14.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:14.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:25:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:16 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:16.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:16.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:17.238Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:25:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:17.238Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:25:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:17.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:25:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:18.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:18 np0005549474 nova_compute[256753]: 2025-12-07 10:25:18.598 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:18 np0005549474 nova_compute[256753]: 2025-12-07 10:25:18.600 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:18 np0005549474 nova_compute[256753]: 2025-12-07 10:25:18.600 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:25:18 np0005549474 nova_compute[256753]: 2025-12-07 10:25:18.600 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:18 np0005549474 nova_compute[256753]: 2025-12-07 10:25:18.645 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:25:18 np0005549474 nova_compute[256753]: 2025-12-07 10:25:18.646 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:18.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:18.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:25:19 np0005549474 podman[292879]: 2025-12-07 10:25:19.241792509 +0000 UTC m=+0.056293706 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:25:19 np0005549474 podman[292880]: 2025-12-07 10:25:19.272073684 +0000 UTC m=+0.085891592 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  7 05:25:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:25:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  7 05:25:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  7 05:25:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:20.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:20.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:22.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:25:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:22.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:25:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:23 np0005549474 nova_compute[256753]: 2025-12-07 10:25:23.647 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:23 np0005549474 nova_compute[256753]: 2025-12-07 10:25:23.649 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:23 np0005549474 nova_compute[256753]: 2025-12-07 10:25:23.649 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:25:23 np0005549474 nova_compute[256753]: 2025-12-07 10:25:23.650 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:23 np0005549474 nova_compute[256753]: 2025-12-07 10:25:23.650 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:25:23 np0005549474 nova_compute[256753]: 2025-12-07 10:25:23.651 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:24.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:24 np0005549474 podman[292930]: 2025-12-07 10:25:24.253507641 +0000 UTC m=+0.067106930 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:25:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:24.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:25:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:26.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:26 np0005549474 nova_compute[256753]: 2025-12-07 10:25:26.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:25:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:26.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:27.239Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:25:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:25:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:25:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  7 05:25:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 05:25:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:28.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:28 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 05:25:28 np0005549474 nova_compute[256753]: 2025-12-07 10:25:28.651 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:28 np0005549474 nova_compute[256753]: 2025-12-07 10:25:28.653 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:28 np0005549474 nova_compute[256753]: 2025-12-07 10:25:28.653 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:25:28 np0005549474 nova_compute[256753]: 2025-12-07 10:25:28.653 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:28 np0005549474 nova_compute[256753]: 2025-12-07 10:25:28.688 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:25:28 np0005549474 nova_compute[256753]: 2025-12-07 10:25:28.689 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:25:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:28.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:25:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:28.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 05:25:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 05:25:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:25:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 05:25:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 05:25:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:29 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  7 05:25:29 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 05:25:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:29] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:25:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:29] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 05:25:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:30.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:25:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:25:30 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:25:30 np0005549474 nova_compute[256753]: 2025-12-07 10:25:30.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:25:30 np0005549474 nova_compute[256753]: 2025-12-07 10:25:30.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:25:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:30.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:31 np0005549474 podman[293157]: 2025-12-07 10:25:31.013128679 +0000 UTC m=+0.061717855 container create cd2b9ab493353a407461faf248f456c22cdfa57cc13ebd8d90a9f431730915f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 05:25:31 np0005549474 systemd[1]: Started libpod-conmon-cd2b9ab493353a407461faf248f456c22cdfa57cc13ebd8d90a9f431730915f6.scope.
Dec  7 05:25:31 np0005549474 podman[293157]: 2025-12-07 10:25:30.982598756 +0000 UTC m=+0.031188002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:25:31 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:25:31 np0005549474 podman[293157]: 2025-12-07 10:25:31.110987886 +0000 UTC m=+0.159577072 container init cd2b9ab493353a407461faf248f456c22cdfa57cc13ebd8d90a9f431730915f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_johnson, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec  7 05:25:31 np0005549474 podman[293157]: 2025-12-07 10:25:31.122396768 +0000 UTC m=+0.170985914 container start cd2b9ab493353a407461faf248f456c22cdfa57cc13ebd8d90a9f431730915f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 05:25:31 np0005549474 podman[293157]: 2025-12-07 10:25:31.125692938 +0000 UTC m=+0.174282134 container attach cd2b9ab493353a407461faf248f456c22cdfa57cc13ebd8d90a9f431730915f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:25:31 np0005549474 youthful_johnson[293173]: 167 167
Dec  7 05:25:31 np0005549474 podman[293157]: 2025-12-07 10:25:31.130024925 +0000 UTC m=+0.178614131 container died cd2b9ab493353a407461faf248f456c22cdfa57cc13ebd8d90a9f431730915f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_johnson, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:25:31 np0005549474 systemd[1]: libpod-cd2b9ab493353a407461faf248f456c22cdfa57cc13ebd8d90a9f431730915f6.scope: Deactivated successfully.
Dec  7 05:25:31 np0005549474 systemd[1]: var-lib-containers-storage-overlay-2452bdc2a4cc3c9a927c7c3b529d259ab27544de63e6bd8d762d6aac6c4425f0-merged.mount: Deactivated successfully.
Dec  7 05:25:31 np0005549474 podman[293157]: 2025-12-07 10:25:31.180240154 +0000 UTC m=+0.228829300 container remove cd2b9ab493353a407461faf248f456c22cdfa57cc13ebd8d90a9f431730915f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_johnson, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 05:25:31 np0005549474 systemd[1]: libpod-conmon-cd2b9ab493353a407461faf248f456c22cdfa57cc13ebd8d90a9f431730915f6.scope: Deactivated successfully.
Dec  7 05:25:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 05:25:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:25:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:31 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:25:31 np0005549474 podman[293199]: 2025-12-07 10:25:31.356444969 +0000 UTC m=+0.043715684 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:25:31 np0005549474 podman[293199]: 2025-12-07 10:25:31.48083851 +0000 UTC m=+0.168109135 container create 5fd8f5743f11e0e2fb1579831d9972239d38fa37ba8141fe5a83758332655206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_chatelet, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 05:25:31 np0005549474 systemd[1]: Started libpod-conmon-5fd8f5743f11e0e2fb1579831d9972239d38fa37ba8141fe5a83758332655206.scope.
Dec  7 05:25:31 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:25:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9795390714b2c7f9b50d1a3f706e53171f1eced5c45d1cd29eb65e3e98f8582/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9795390714b2c7f9b50d1a3f706e53171f1eced5c45d1cd29eb65e3e98f8582/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9795390714b2c7f9b50d1a3f706e53171f1eced5c45d1cd29eb65e3e98f8582/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9795390714b2c7f9b50d1a3f706e53171f1eced5c45d1cd29eb65e3e98f8582/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:31 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9795390714b2c7f9b50d1a3f706e53171f1eced5c45d1cd29eb65e3e98f8582/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:31 np0005549474 podman[293199]: 2025-12-07 10:25:31.584689762 +0000 UTC m=+0.271960427 container init 5fd8f5743f11e0e2fb1579831d9972239d38fa37ba8141fe5a83758332655206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_chatelet, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:25:31 np0005549474 podman[293199]: 2025-12-07 10:25:31.595223508 +0000 UTC m=+0.282494153 container start 5fd8f5743f11e0e2fb1579831d9972239d38fa37ba8141fe5a83758332655206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 05:25:31 np0005549474 podman[293199]: 2025-12-07 10:25:31.599110605 +0000 UTC m=+0.286381240 container attach 5fd8f5743f11e0e2fb1579831d9972239d38fa37ba8141fe5a83758332655206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_chatelet, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 05:25:31 np0005549474 nova_compute[256753]: 2025-12-07 10:25:31.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:25:31 np0005549474 bold_chatelet[293216]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:25:31 np0005549474 bold_chatelet[293216]: --> All data devices are unavailable
Dec  7 05:25:31 np0005549474 systemd[1]: libpod-5fd8f5743f11e0e2fb1579831d9972239d38fa37ba8141fe5a83758332655206.scope: Deactivated successfully.
Dec  7 05:25:31 np0005549474 podman[293199]: 2025-12-07 10:25:31.970571392 +0000 UTC m=+0.657842087 container died 5fd8f5743f11e0e2fb1579831d9972239d38fa37ba8141fe5a83758332655206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:25:32 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d9795390714b2c7f9b50d1a3f706e53171f1eced5c45d1cd29eb65e3e98f8582-merged.mount: Deactivated successfully.
Dec  7 05:25:32 np0005549474 podman[293199]: 2025-12-07 10:25:32.03024798 +0000 UTC m=+0.717518645 container remove 5fd8f5743f11e0e2fb1579831d9972239d38fa37ba8141fe5a83758332655206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:25:32 np0005549474 systemd[1]: libpod-conmon-5fd8f5743f11e0e2fb1579831d9972239d38fa37ba8141fe5a83758332655206.scope: Deactivated successfully.
Dec  7 05:25:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:32.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec  7 05:25:32 np0005549474 podman[293335]: 2025-12-07 10:25:32.682301607 +0000 UTC m=+0.043574029 container create bd42ec92193f28789cdf031e8af44f7ed9436ed715e123d712053360c93f2643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 05:25:32 np0005549474 systemd[1]: Started libpod-conmon-bd42ec92193f28789cdf031e8af44f7ed9436ed715e123d712053360c93f2643.scope.
Dec  7 05:25:32 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:25:32 np0005549474 nova_compute[256753]: 2025-12-07 10:25:32.756 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:25:32 np0005549474 podman[293335]: 2025-12-07 10:25:32.665647803 +0000 UTC m=+0.026920225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:25:32 np0005549474 podman[293335]: 2025-12-07 10:25:32.771871599 +0000 UTC m=+0.133144051 container init bd42ec92193f28789cdf031e8af44f7ed9436ed715e123d712053360c93f2643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 05:25:32 np0005549474 podman[293335]: 2025-12-07 10:25:32.786354275 +0000 UTC m=+0.147626697 container start bd42ec92193f28789cdf031e8af44f7ed9436ed715e123d712053360c93f2643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:25:32 np0005549474 sweet_bell[293351]: 167 167
Dec  7 05:25:32 np0005549474 podman[293335]: 2025-12-07 10:25:32.789418438 +0000 UTC m=+0.150690960 container attach bd42ec92193f28789cdf031e8af44f7ed9436ed715e123d712053360c93f2643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bell, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 05:25:32 np0005549474 systemd[1]: libpod-bd42ec92193f28789cdf031e8af44f7ed9436ed715e123d712053360c93f2643.scope: Deactivated successfully.
Dec  7 05:25:32 np0005549474 podman[293335]: 2025-12-07 10:25:32.791000431 +0000 UTC m=+0.152272843 container died bd42ec92193f28789cdf031e8af44f7ed9436ed715e123d712053360c93f2643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:25:32 np0005549474 systemd[1]: var-lib-containers-storage-overlay-4cf9e52837d0b2ff2f781d326506b5ac0ad131b8a4bd118d20efd01ffedb819d-merged.mount: Deactivated successfully.
Dec  7 05:25:32 np0005549474 podman[293335]: 2025-12-07 10:25:32.836128491 +0000 UTC m=+0.197400903 container remove bd42ec92193f28789cdf031e8af44f7ed9436ed715e123d712053360c93f2643 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bell, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:25:32 np0005549474 systemd[1]: libpod-conmon-bd42ec92193f28789cdf031e8af44f7ed9436ed715e123d712053360c93f2643.scope: Deactivated successfully.
Dec  7 05:25:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:32.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:32 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:33 np0005549474 podman[293376]: 2025-12-07 10:25:33.023786947 +0000 UTC m=+0.044663519 container create aa350bb1579f85db7a2a58173ddc07adea0c37743e0cff8703ea09f455031a75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_golick, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:25:33 np0005549474 systemd[1]: Started libpod-conmon-aa350bb1579f85db7a2a58173ddc07adea0c37743e0cff8703ea09f455031a75.scope.
Dec  7 05:25:33 np0005549474 podman[293376]: 2025-12-07 10:25:33.007482283 +0000 UTC m=+0.028358875 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:25:33 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:25:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ab85c425d2dcae2c55fee10f6a787f9038d59a60a0e7ab8a283ea5570bf5b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ab85c425d2dcae2c55fee10f6a787f9038d59a60a0e7ab8a283ea5570bf5b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ab85c425d2dcae2c55fee10f6a787f9038d59a60a0e7ab8a283ea5570bf5b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:33 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ab85c425d2dcae2c55fee10f6a787f9038d59a60a0e7ab8a283ea5570bf5b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:33 np0005549474 podman[293376]: 2025-12-07 10:25:33.128699738 +0000 UTC m=+0.149576360 container init aa350bb1579f85db7a2a58173ddc07adea0c37743e0cff8703ea09f455031a75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_golick, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 05:25:33 np0005549474 podman[293376]: 2025-12-07 10:25:33.138669939 +0000 UTC m=+0.159546551 container start aa350bb1579f85db7a2a58173ddc07adea0c37743e0cff8703ea09f455031a75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_golick, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 05:25:33 np0005549474 podman[293376]: 2025-12-07 10:25:33.14232446 +0000 UTC m=+0.163201072 container attach aa350bb1579f85db7a2a58173ddc07adea0c37743e0cff8703ea09f455031a75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_golick, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 05:25:33 np0005549474 festive_golick[293392]: {
Dec  7 05:25:33 np0005549474 festive_golick[293392]:    "0": [
Dec  7 05:25:33 np0005549474 festive_golick[293392]:        {
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            "devices": [
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "/dev/loop3"
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            ],
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            "lv_name": "ceph_lv0",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            "lv_size": "21470642176",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            "name": "ceph_lv0",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            "tags": {
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.cluster_name": "ceph",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.crush_device_class": "",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.encrypted": "0",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.osd_id": "0",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.type": "block",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.vdo": "0",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:                "ceph.with_tpm": "0"
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            },
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            "type": "block",
Dec  7 05:25:33 np0005549474 festive_golick[293392]:            "vg_name": "ceph_vg0"
Dec  7 05:25:33 np0005549474 festive_golick[293392]:        }
Dec  7 05:25:33 np0005549474 festive_golick[293392]:    ]
Dec  7 05:25:33 np0005549474 festive_golick[293392]: }
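The JSON emitted by the festive_golick container has the shape of "ceph-volume lvm list --format json" output: a map of OSD id to a list of LV records carrying ceph.* tags. A minimal sketch that flattens it into one line per OSD (the input file name is a placeholder; save the block above verbatim to try it):

    import json

    with open("ceph_volume_lvm_list.json") as f:   # placeholder file name
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} type={lv['type']} "
                  f"osd_fsid={tags.get('ceph.osd_fsid')} "
                  f"devices={','.join(lv.get('devices', []))}")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 type=block osd_fsid=32dc95f1-... devices=/dev/loop3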
Dec  7 05:25:33 np0005549474 systemd[1]: libpod-aa350bb1579f85db7a2a58173ddc07adea0c37743e0cff8703ea09f455031a75.scope: Deactivated successfully.
Dec  7 05:25:33 np0005549474 podman[293376]: 2025-12-07 10:25:33.461182482 +0000 UTC m=+0.482059044 container died aa350bb1579f85db7a2a58173ddc07adea0c37743e0cff8703ea09f455031a75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 05:25:33 np0005549474 systemd[1]: var-lib-containers-storage-overlay-23ab85c425d2dcae2c55fee10f6a787f9038d59a60a0e7ab8a283ea5570bf5b0-merged.mount: Deactivated successfully.
Dec  7 05:25:33 np0005549474 podman[293376]: 2025-12-07 10:25:33.519611586 +0000 UTC m=+0.540488168 container remove aa350bb1579f85db7a2a58173ddc07adea0c37743e0cff8703ea09f455031a75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  7 05:25:33 np0005549474 systemd[1]: libpod-conmon-aa350bb1579f85db7a2a58173ddc07adea0c37743e0cff8703ea09f455031a75.scope: Deactivated successfully.
Dec  7 05:25:33 np0005549474 nova_compute[256753]: 2025-12-07 10:25:33.690 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:25:33 np0005549474 nova_compute[256753]: 2025-12-07 10:25:33.692 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:25:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:34.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:34 np0005549474 podman[293507]: 2025-12-07 10:25:34.254045089 +0000 UTC m=+0.044622087 container create e3e27efb628c054315a6ddc076d2eb97bd6b74698aca5336e3100873b9dbba69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:25:34 np0005549474 systemd[1]: Started libpod-conmon-e3e27efb628c054315a6ddc076d2eb97bd6b74698aca5336e3100873b9dbba69.scope.
Dec  7 05:25:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:25:34 np0005549474 podman[293507]: 2025-12-07 10:25:34.319508415 +0000 UTC m=+0.110085493 container init e3e27efb628c054315a6ddc076d2eb97bd6b74698aca5336e3100873b9dbba69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:25:34 np0005549474 podman[293507]: 2025-12-07 10:25:34.325468237 +0000 UTC m=+0.116045245 container start e3e27efb628c054315a6ddc076d2eb97bd6b74698aca5336e3100873b9dbba69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:25:34 np0005549474 podman[293507]: 2025-12-07 10:25:34.328308375 +0000 UTC m=+0.118885463 container attach e3e27efb628c054315a6ddc076d2eb97bd6b74698aca5336e3100873b9dbba69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 05:25:34 np0005549474 youthful_brahmagupta[293523]: 167 167
Dec  7 05:25:34 np0005549474 systemd[1]: libpod-e3e27efb628c054315a6ddc076d2eb97bd6b74698aca5336e3100873b9dbba69.scope: Deactivated successfully.
Dec  7 05:25:34 np0005549474 podman[293507]: 2025-12-07 10:25:34.235768962 +0000 UTC m=+0.026346000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:25:34 np0005549474 podman[293507]: 2025-12-07 10:25:34.330505945 +0000 UTC m=+0.121082963 container died e3e27efb628c054315a6ddc076d2eb97bd6b74698aca5336e3100873b9dbba69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 05:25:34 np0005549474 systemd[1]: var-lib-containers-storage-overlay-06d77a75331b93c1c5a62e791404edd859a41ff4a23ae4627f8966d84d7a28f9-merged.mount: Deactivated successfully.
Dec  7 05:25:34 np0005549474 podman[293507]: 2025-12-07 10:25:34.364605284 +0000 UTC m=+0.155182282 container remove e3e27efb628c054315a6ddc076d2eb97bd6b74698aca5336e3100873b9dbba69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 05:25:34 np0005549474 systemd[1]: libpod-conmon-e3e27efb628c054315a6ddc076d2eb97bd6b74698aca5336e3100873b9dbba69.scope: Deactivated successfully.
Dec  7 05:25:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec  7 05:25:34 np0005549474 podman[293546]: 2025-12-07 10:25:34.532154203 +0000 UTC m=+0.043032685 container create 61235b286c9fbb365d121cd3ac565758b1ecaf7ec24e49395a779c0b807bf37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:25:34 np0005549474 systemd[1]: Started libpod-conmon-61235b286c9fbb365d121cd3ac565758b1ecaf7ec24e49395a779c0b807bf37d.scope.
Dec  7 05:25:34 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:25:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bbf0abd3d756f11dc5f9f9561b73763ad12145c42cc7177f72fbc96628e334/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bbf0abd3d756f11dc5f9f9561b73763ad12145c42cc7177f72fbc96628e334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bbf0abd3d756f11dc5f9f9561b73763ad12145c42cc7177f72fbc96628e334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:34 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bbf0abd3d756f11dc5f9f9561b73763ad12145c42cc7177f72fbc96628e334/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:25:34 np0005549474 podman[293546]: 2025-12-07 10:25:34.600168827 +0000 UTC m=+0.111047309 container init 61235b286c9fbb365d121cd3ac565758b1ecaf7ec24e49395a779c0b807bf37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:25:34 np0005549474 podman[293546]: 2025-12-07 10:25:34.610960461 +0000 UTC m=+0.121838943 container start 61235b286c9fbb365d121cd3ac565758b1ecaf7ec24e49395a779c0b807bf37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:25:34 np0005549474 podman[293546]: 2025-12-07 10:25:34.516604258 +0000 UTC m=+0.027482770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:25:34 np0005549474 podman[293546]: 2025-12-07 10:25:34.614093056 +0000 UTC m=+0.124971538 container attach 61235b286c9fbb365d121cd3ac565758b1ecaf7ec24e49395a779c0b807bf37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 05:25:34 np0005549474 nova_compute[256753]: 2025-12-07 10:25:34.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:25:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:34.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:35 np0005549474 lvm[293638]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:25:35 np0005549474 lvm[293638]: VG ceph_vg0 finished
Dec  7 05:25:35 np0005549474 nice_carver[293563]: {}
Dec  7 05:25:35 np0005549474 systemd[1]: libpod-61235b286c9fbb365d121cd3ac565758b1ecaf7ec24e49395a779c0b807bf37d.scope: Deactivated successfully.
Dec  7 05:25:35 np0005549474 systemd[1]: libpod-61235b286c9fbb365d121cd3ac565758b1ecaf7ec24e49395a779c0b807bf37d.scope: Consumed 1.099s CPU time.
Dec  7 05:25:35 np0005549474 podman[293546]: 2025-12-07 10:25:35.380906763 +0000 UTC m=+0.891785285 container died 61235b286c9fbb365d121cd3ac565758b1ecaf7ec24e49395a779c0b807bf37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 05:25:35 np0005549474 systemd[1]: var-lib-containers-storage-overlay-01bbf0abd3d756f11dc5f9f9561b73763ad12145c42cc7177f72fbc96628e334-merged.mount: Deactivated successfully.
Dec  7 05:25:35 np0005549474 podman[293546]: 2025-12-07 10:25:35.427885214 +0000 UTC m=+0.938763726 container remove 61235b286c9fbb365d121cd3ac565758b1ecaf7ec24e49395a779c0b807bf37d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:25:35 np0005549474 systemd[1]: libpod-conmon-61235b286c9fbb365d121cd3ac565758b1ecaf7ec24e49395a779c0b807bf37d.scope: Deactivated successfully.
Dec  7 05:25:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:25:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:35 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:25:35 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:35 np0005549474 nova_compute[256753]: 2025-12-07 10:25:35.748 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:25:35 np0005549474 nova_compute[256753]: 2025-12-07 10:25:35.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:25:35 np0005549474 nova_compute[256753]: 2025-12-07 10:25:35.782 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:25:35 np0005549474 nova_compute[256753]: 2025-12-07 10:25:35.782 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:25:35 np0005549474 nova_compute[256753]: 2025-12-07 10:25:35.782 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:25:35 np0005549474 nova_compute[256753]: 2025-12-07 10:25:35.782 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:25:35 np0005549474 nova_compute[256753]: 2025-12-07 10:25:35.782 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:25:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:36 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:36.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:25:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/299295895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:25:36 np0005549474 nova_compute[256753]: 2025-12-07 10:25:36.268 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
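nova-compute shells out to this same "ceph df --format=json" command during each resource audit and parses the JSON reply for capacity. A rough equivalent of that probe, reading the cluster-wide totals (the stats/total_bytes/total_avail_bytes keys follow the usual ceph df JSON layout, but treat the exact field names as an assumption to verify on your release):

    import json
    import subprocess

    # Same command as in the oslo_concurrency.processutils debug lines above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"total={stats['total_bytes'] / gib:.1f} GiB, "
          f"avail={stats['total_avail_bytes'] / gib:.1f} GiB")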
Dec  7 05:25:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec  7 05:25:36 np0005549474 nova_compute[256753]: 2025-12-07 10:25:36.456 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:25:36 np0005549474 nova_compute[256753]: 2025-12-07 10:25:36.457 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4448MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:25:36 np0005549474 nova_compute[256753]: 2025-12-07 10:25:36.457 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:25:36 np0005549474 nova_compute[256753]: 2025-12-07 10:25:36.458 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:25:36 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:36 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:25:36 np0005549474 nova_compute[256753]: 2025-12-07 10:25:36.566 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:25:36 np0005549474 nova_compute[256753]: 2025-12-07 10:25:36.567 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:25:36 np0005549474 nova_compute[256753]: 2025-12-07 10:25:36.582 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:25:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:25:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:36.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:25:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:25:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147241048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:25:37 np0005549474 nova_compute[256753]: 2025-12-07 10:25:37.017 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:25:37 np0005549474 nova_compute[256753]: 2025-12-07 10:25:37.026 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:25:37 np0005549474 nova_compute[256753]: 2025-12-07 10:25:37.042 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:25:37 np0005549474 nova_compute[256753]: 2025-12-07 10:25:37.046 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:25:37 np0005549474 nova_compute[256753]: 2025-12-07 10:25:37.046 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
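The inventory dict reported above fixes the schedulable capacity per resource class: total minus reserved, scaled by the allocation ratio. Worked out for the values logged here (a sketch of the arithmetic, not Placement's actual code path):

    # Values copied from the set_inventory_for_provider line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2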
Dec  7 05:25:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:37.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:25:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:38 np0005549474 nova_compute[256753]: 2025-12-07 10:25:38.048 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:25:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec  7 05:25:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:25:38.635 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:25:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:25:38.636 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:25:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:25:38.636 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:25:38 np0005549474 nova_compute[256753]: 2025-12-07 10:25:38.694 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:38 np0005549474 nova_compute[256753]: 2025-12-07 10:25:38.697 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:38.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:25:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:38.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:39 np0005549474 nova_compute[256753]: 2025-12-07 10:25:39.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:25:39 np0005549474 nova_compute[256753]: 2025-12-07 10:25:39.755 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:25:39 np0005549474 nova_compute[256753]: 2025-12-07 10:25:39.755 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:25:39 np0005549474 nova_compute[256753]: 2025-12-07 10:25:39.779 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:25:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:39] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:25:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:39] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:25:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:40.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec  7 05:25:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:40.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:42.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:25:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:25:42
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'vms', '.nfs', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:25:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:25:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:42.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
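The per-pool autoscaler lines above are internally consistent with pg_target = usage_fraction * bias * N, with N = 300 for this cluster before the result is quantized (N plausibly being 3 OSDs times the default mon_target_pg_per_osd of 100; that split is an assumption). Checking two of the logged rows:

    rows = [
        # (pool, usage_fraction, bias, logged pg target) from the lines above
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]
    N = 300  # assumed: 3 OSDs * mon_target_pg_per_osd (default 100)
    for pool, usage, bias, logged in rows:
        target = usage * bias * N
        assert abs(target - logged) < 1e-12, pool
        print(f"{pool}: {target:.6g} matches logged {logged:.6g}")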
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:25:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:25:43 np0005549474 nova_compute[256753]: 2025-12-07 10:25:43.696 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:25:43 np0005549474 nova_compute[256753]: 2025-12-07 10:25:43.698 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:25:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:44.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:25:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:44.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:46.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:46.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:47.242Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:25:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:47.242Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:25:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:48.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:48 np0005549474 nova_compute[256753]: 2025-12-07 10:25:48.699 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:25:48 np0005549474 nova_compute[256753]: 2025-12-07 10:25:48.702 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:25:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:48.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:25:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:48.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:49] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:25:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:49] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:25:50 np0005549474 podman[293763]: 2025-12-07 10:25:50.222012566 +0000 UTC m=+0.066080493 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:25:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:50 np0005549474 podman[293764]: 2025-12-07 10:25:50.243766579 +0000 UTC m=+0.079637802 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Dec  7 05:25:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:25:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:25:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:50.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:25:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  7 05:25:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:52.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  7 05:25:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:52.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:53 np0005549474 nova_compute[256753]: 2025-12-07 10:25:53.701 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:25:53 np0005549474 nova_compute[256753]: 2025-12-07 10:25:53.704 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:25:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:54.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:25:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:55 np0005549474 podman[293814]: 2025-12-07 10:25:55.280493054 +0000 UTC m=+0.088009181 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec  7 05:25:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:25:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:25:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:25:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:25:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:25:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:56.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:56.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:57.244Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:25:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:25:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:25:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:25:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:25:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:58.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:25:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:25:58 np0005549474 nova_compute[256753]: 2025-12-07 10:25:58.705 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:25:58 np0005549474 nova_compute[256753]: 2025-12-07 10:25:58.707 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:25:58 np0005549474 nova_compute[256753]: 2025-12-07 10:25:58.707 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  7 05:25:58 np0005549474 nova_compute[256753]: 2025-12-07 10:25:58.707 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:25:58 np0005549474 nova_compute[256753]: 2025-12-07 10:25:58.725 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:25:58 np0005549474 nova_compute[256753]: 2025-12-07 10:25:58.725 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:25:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:25:58.936Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:25:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:25:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:25:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:58.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:25:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:25:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:25:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:26:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:00.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:26:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:26:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:02.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  7 05:26:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/26774792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  7 05:26:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  7 05:26:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/26774792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  7 05:26:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:03 np0005549474 nova_compute[256753]: 2025-12-07 10:26:03.726 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:26:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:04.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:26:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:05.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:26:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:26:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:06.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:26:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:07.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:07.245Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:26:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:07.245Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:26:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:07.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:26:07 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:08.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:08 np0005549474 nova_compute[256753]: 2025-12-07 10:26:08.727 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:26:08 np0005549474 nova_compute[256753]: 2025-12-07 10:26:08.728 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:26:08 np0005549474 nova_compute[256753]: 2025-12-07 10:26:08.729 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  7 05:26:08 np0005549474 nova_compute[256753]: 2025-12-07 10:26:08.729 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:26:08 np0005549474 nova_compute[256753]: 2025-12-07 10:26:08.763 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:26:08 np0005549474 nova_compute[256753]: 2025-12-07 10:26:08.764 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:26:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:08.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:26:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:26:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec  7 05:26:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:10.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:26:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:26:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:11.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:26:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:12.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:26:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:26:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:26:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:26:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:26:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:26:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:26:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:26:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:26:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:13.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:13 np0005549474 nova_compute[256753]: 2025-12-07 10:26:13.765 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:26:13 np0005549474 nova_compute[256753]: 2025-12-07 10:26:13.766 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:26:13 np0005549474 nova_compute[256753]: 2025-12-07 10:26:13.767 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  7 05:26:13 np0005549474 nova_compute[256753]: 2025-12-07 10:26:13.767 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:26:13 np0005549474 nova_compute[256753]: 2025-12-07 10:26:13.810 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:26:13 np0005549474 nova_compute[256753]: 2025-12-07 10:26:13.810 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:26:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:14.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:26:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:15.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:16 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:26:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:16.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:17.246Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:26:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:17.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:17 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:18.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:18 np0005549474 nova_compute[256753]: 2025-12-07 10:26:18.811 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:26:18 np0005549474 nova_compute[256753]: 2025-12-07 10:26:18.813 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:26:18 np0005549474 nova_compute[256753]: 2025-12-07 10:26:18.813 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  7 05:26:18 np0005549474 nova_compute[256753]: 2025-12-07 10:26:18.813 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:26:18 np0005549474 nova_compute[256753]: 2025-12-07 10:26:18.863 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:26:18 np0005549474 nova_compute[256753]: 2025-12-07 10:26:18.864 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:26:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:18.938Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:26:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:19.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.568958) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179569016, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1725, "num_deletes": 251, "total_data_size": 3316410, "memory_usage": 3377184, "flush_reason": "Manual Compaction"}
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179588362, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3251062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35323, "largest_seqno": 37046, "table_properties": {"data_size": 3243077, "index_size": 4863, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16602, "raw_average_key_size": 20, "raw_value_size": 3227081, "raw_average_value_size": 3949, "num_data_blocks": 209, "num_entries": 817, "num_filter_entries": 817, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765103008, "oldest_key_time": 1765103008, "file_creation_time": 1765103179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 19451 microseconds, and 6196 cpu microseconds.
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.588410) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3251062 bytes OK
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.588440) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.590348) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.590365) EVENT_LOG_v1 {"time_micros": 1765103179590359, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.590387) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3309196, prev total WAL file size 3309196, number of live WAL files 2.
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.591463) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3174KB)], [77(11MB)]
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179591495, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 15315141, "oldest_snapshot_seqno": -1}
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6835 keys, 13128741 bytes, temperature: kUnknown
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179699306, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 13128741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13086089, "index_size": 24492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 179486, "raw_average_key_size": 26, "raw_value_size": 12965869, "raw_average_value_size": 1896, "num_data_blocks": 958, "num_entries": 6835, "num_filter_entries": 6835, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765103179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.699591) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13128741 bytes
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.700910) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.9 rd, 121.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 11.5 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 7355, records dropped: 520 output_compression: NoCompression
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.700931) EVENT_LOG_v1 {"time_micros": 1765103179700921, "job": 44, "event": "compaction_finished", "compaction_time_micros": 107918, "compaction_time_cpu_micros": 38278, "output_level": 6, "num_output_files": 1, "total_output_size": 13128741, "num_input_records": 7355, "num_output_records": 6835, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179701949, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179704883, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.591384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.704976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.704981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.704983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.704984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:26:19 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:26:19.704986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:26:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:19] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Dec  7 05:26:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:19] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Dec  7 05:26:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:20.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:26:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:21 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:26:21 np0005549474 podman[293884]: 2025-12-07 10:26:21.273422719 +0000 UTC m=+0.080544037 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  7 05:26:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:21.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:21 np0005549474 podman[293885]: 2025-12-07 10:26:21.352049753 +0000 UTC m=+0.156295853 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  7 05:26:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:22.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:22 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:23.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:23 np0005549474 nova_compute[256753]: 2025-12-07 10:26:23.865 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:26:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:24.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:26:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:25.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:26 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:26 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:26:26 np0005549474 podman[293932]: 2025-12-07 10:26:26.238815448 +0000 UTC m=+0.057359494 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 05:26:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:26.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:27.247Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:26:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:27.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:26:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:26:27 np0005549474 nova_compute[256753]: 2025-12-07 10:26:27.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:26:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:28.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:28 np0005549474 nova_compute[256753]: 2025-12-07 10:26:28.867 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:26:28 np0005549474 nova_compute[256753]: 2025-12-07 10:26:28.869 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:26:28 np0005549474 nova_compute[256753]: 2025-12-07 10:26:28.869 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:26:28 np0005549474 nova_compute[256753]: 2025-12-07 10:26:28.869 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:26:28 np0005549474 nova_compute[256753]: 2025-12-07 10:26:28.888 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:26:28 np0005549474 nova_compute[256753]: 2025-12-07 10:26:28.889 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:26:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:28.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:26:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:29.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:29] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  7 05:26:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:29] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  7 05:26:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:30.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:26:30 np0005549474 nova_compute[256753]: 2025-12-07 10:26:30.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:26:30 np0005549474 nova_compute[256753]: 2025-12-07 10:26:30.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:26:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:31 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:31 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:26:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:31.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:31 np0005549474 nova_compute[256753]: 2025-12-07 10:26:31.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:26:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:32.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:32 np0005549474 nova_compute[256753]: 2025-12-07 10:26:32.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:26:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:33.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:33 np0005549474 nova_compute[256753]: 2025-12-07 10:26:33.890 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:26:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:34.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:26:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:35.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:35 np0005549474 nova_compute[256753]: 2025-12-07 10:26:35.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:26:35 np0005549474 nova_compute[256753]: 2025-12-07 10:26:35.780 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:26:35 np0005549474 nova_compute[256753]: 2025-12-07 10:26:35.781 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:26:35 np0005549474 nova_compute[256753]: 2025-12-07 10:26:35.781 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:26:35 np0005549474 nova_compute[256753]: 2025-12-07 10:26:35.782 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:26:35 np0005549474 nova_compute[256753]: 2025-12-07 10:26:35.782 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:26:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:36 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:36 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:26:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:36.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1518036194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:26:36 np0005549474 nova_compute[256753]: 2025-12-07 10:26:36.325 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:26:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:36 np0005549474 nova_compute[256753]: 2025-12-07 10:26:36.505 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:26:36 np0005549474 nova_compute[256753]: 2025-12-07 10:26:36.508 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4483MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:26:36 np0005549474 nova_compute[256753]: 2025-12-07 10:26:36.508 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:26:36 np0005549474 nova_compute[256753]: 2025-12-07 10:26:36.508 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:26:36 np0005549474 nova_compute[256753]: 2025-12-07 10:26:36.605 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:26:36 np0005549474 nova_compute[256753]: 2025-12-07 10:26:36.605 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:26:36 np0005549474 nova_compute[256753]: 2025-12-07 10:26:36.631 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:26:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:26:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:26:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:26:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239165462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:26:37 np0005549474 nova_compute[256753]: 2025-12-07 10:26:37.084 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:26:37 np0005549474 nova_compute[256753]: 2025-12-07 10:26:37.093 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:26:37 np0005549474 nova_compute[256753]: 2025-12-07 10:26:37.110 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:26:37 np0005549474 nova_compute[256753]: 2025-12-07 10:26:37.112 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:26:37 np0005549474 nova_compute[256753]: 2025-12-07 10:26:37.112 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:26:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:37.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:26:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:37.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:37 np0005549474 podman[294208]: 2025-12-07 10:26:37.400855747 +0000 UTC m=+0.060014238 container create 0c2ac29e1b14ff9db28765d7c0c14dbfcf2238256eb96b6860f11558052edb2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dijkstra, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 05:26:37 np0005549474 systemd[1]: Started libpod-conmon-0c2ac29e1b14ff9db28765d7c0c14dbfcf2238256eb96b6860f11558052edb2f.scope.
Dec  7 05:26:37 np0005549474 podman[294208]: 2025-12-07 10:26:37.371987819 +0000 UTC m=+0.031146320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:26:37 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:26:37 np0005549474 podman[294208]: 2025-12-07 10:26:37.492749482 +0000 UTC m=+0.151907983 container init 0c2ac29e1b14ff9db28765d7c0c14dbfcf2238256eb96b6860f11558052edb2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:26:37 np0005549474 podman[294208]: 2025-12-07 10:26:37.505916191 +0000 UTC m=+0.165074642 container start 0c2ac29e1b14ff9db28765d7c0c14dbfcf2238256eb96b6860f11558052edb2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dijkstra, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 05:26:37 np0005549474 podman[294208]: 2025-12-07 10:26:37.509287612 +0000 UTC m=+0.168446083 container attach 0c2ac29e1b14ff9db28765d7c0c14dbfcf2238256eb96b6860f11558052edb2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:26:37 np0005549474 eloquent_dijkstra[294225]: 167 167
Dec  7 05:26:37 np0005549474 systemd[1]: libpod-0c2ac29e1b14ff9db28765d7c0c14dbfcf2238256eb96b6860f11558052edb2f.scope: Deactivated successfully.
Dec  7 05:26:37 np0005549474 conmon[294225]: conmon 0c2ac29e1b14ff9db287 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c2ac29e1b14ff9db28765d7c0c14dbfcf2238256eb96b6860f11558052edb2f.scope/container/memory.events
Dec  7 05:26:37 np0005549474 podman[294208]: 2025-12-07 10:26:37.513743474 +0000 UTC m=+0.172901925 container died 0c2ac29e1b14ff9db28765d7c0c14dbfcf2238256eb96b6860f11558052edb2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:26:37 np0005549474 systemd[1]: var-lib-containers-storage-overlay-079a2787d9878743ac58a43b64ff9af7be83cfb45872a2b4d261065339b7074b-merged.mount: Deactivated successfully.
Dec  7 05:26:37 np0005549474 podman[294208]: 2025-12-07 10:26:37.550460965 +0000 UTC m=+0.209619416 container remove 0c2ac29e1b14ff9db28765d7c0c14dbfcf2238256eb96b6860f11558052edb2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:26:37 np0005549474 systemd[1]: libpod-conmon-0c2ac29e1b14ff9db28765d7c0c14dbfcf2238256eb96b6860f11558052edb2f.scope: Deactivated successfully.
Dec  7 05:26:37 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:26:37 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:26:37 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:26:37 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:26:37 np0005549474 podman[294249]: 2025-12-07 10:26:37.725010274 +0000 UTC m=+0.034225474 container create 5dba66c90ae2f23f504e9a0bdf3e4d825bd9e88b7798e819349c347db5743cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 05:26:37 np0005549474 systemd[1]: Started libpod-conmon-5dba66c90ae2f23f504e9a0bdf3e4d825bd9e88b7798e819349c347db5743cb8.scope.
Dec  7 05:26:37 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:26:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef81032d9852cce3b9f9b3d085fca9c92ec37f87bb1923c58a10be8eef68f5da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef81032d9852cce3b9f9b3d085fca9c92ec37f87bb1923c58a10be8eef68f5da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef81032d9852cce3b9f9b3d085fca9c92ec37f87bb1923c58a10be8eef68f5da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef81032d9852cce3b9f9b3d085fca9c92ec37f87bb1923c58a10be8eef68f5da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:37 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef81032d9852cce3b9f9b3d085fca9c92ec37f87bb1923c58a10be8eef68f5da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:37 np0005549474 podman[294249]: 2025-12-07 10:26:37.793838521 +0000 UTC m=+0.103053751 container init 5dba66c90ae2f23f504e9a0bdf3e4d825bd9e88b7798e819349c347db5743cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:26:37 np0005549474 podman[294249]: 2025-12-07 10:26:37.803578246 +0000 UTC m=+0.112793446 container start 5dba66c90ae2f23f504e9a0bdf3e4d825bd9e88b7798e819349c347db5743cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 05:26:37 np0005549474 podman[294249]: 2025-12-07 10:26:37.710314813 +0000 UTC m=+0.019530033 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:26:37 np0005549474 podman[294249]: 2025-12-07 10:26:37.80701161 +0000 UTC m=+0.116226810 container attach 5dba66c90ae2f23f504e9a0bdf3e4d825bd9e88b7798e819349c347db5743cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:26:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:38 np0005549474 nova_compute[256753]: 2025-12-07 10:26:38.107 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:26:38 np0005549474 nova_compute[256753]: 2025-12-07 10:26:38.109 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:26:38 np0005549474 nova_compute[256753]: 2025-12-07 10:26:38.110 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:26:38 np0005549474 focused_davinci[294265]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:26:38 np0005549474 focused_davinci[294265]: --> All data devices are unavailable
Dec  7 05:26:38 np0005549474 systemd[1]: libpod-5dba66c90ae2f23f504e9a0bdf3e4d825bd9e88b7798e819349c347db5743cb8.scope: Deactivated successfully.
Dec  7 05:26:38 np0005549474 podman[294249]: 2025-12-07 10:26:38.195544893 +0000 UTC m=+0.504760093 container died 5dba66c90ae2f23f504e9a0bdf3e4d825bd9e88b7798e819349c347db5743cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_davinci, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 05:26:38 np0005549474 systemd[1]: var-lib-containers-storage-overlay-ef81032d9852cce3b9f9b3d085fca9c92ec37f87bb1923c58a10be8eef68f5da-merged.mount: Deactivated successfully.
Dec  7 05:26:38 np0005549474 podman[294249]: 2025-12-07 10:26:38.246950975 +0000 UTC m=+0.556166185 container remove 5dba66c90ae2f23f504e9a0bdf3e4d825bd9e88b7798e819349c347db5743cb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_davinci, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 05:26:38 np0005549474 systemd[1]: libpod-conmon-5dba66c90ae2f23f504e9a0bdf3e4d825bd9e88b7798e819349c347db5743cb8.scope: Deactivated successfully.
Dec  7 05:26:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:38.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
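The three radosgw lines above are one request: beast logs a fixed access-log layout (request handle, client IP, user, timestamp, request line, HTTP status, body bytes, latency). A minimal parsing sketch in Python, assuming exactly the layout shown here; the regex and field names are illustrative, not part of radosgw:

    import re

    # Matches the beast access-log layout seen in the lines above.
    BEAST_RE = re.compile(
        r'beast: (?P<handle>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        """Return a dict of access-log fields, or None if the line does not match."""
        m = BEAST_RE.search(line)
        return m.groupdict() if m else None

    sample = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
              '[07/Dec/2025:10:26:38.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.000000000s')
    fields = parse_beast(sample)
    print(fields["ip"], fields["status"])  # 192.168.122.100 200

These anonymous HEAD / probes from 192.168.122.100 and 192.168.122.102 recur every second or so throughout this section and read like health checks rather than client traffic.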
Dec  7 05:26:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:26:38.637 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:26:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:26:38.637 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:26:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:26:38.638 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:26:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec  7 05:26:38 np0005549474 nova_compute[256753]: 2025-12-07 10:26:38.892 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:26:38 np0005549474 nova_compute[256753]: 2025-12-07 10:26:38.895 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:26:38 np0005549474 nova_compute[256753]: 2025-12-07 10:26:38.895 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:26:38 np0005549474 nova_compute[256753]: 2025-12-07 10:26:38.895 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:26:38 np0005549474 podman[294384]: 2025-12-07 10:26:38.963079509 +0000 UTC m=+0.051895305 container create 8994bd1d1ff9e8e751e4c7d2dcef89b41fbe5d5007d25798f0feb4257e916a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 05:26:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:38.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:26:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:38.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:26:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:38.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
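The dashboard alert webhooks on compute-1 and compute-2 (port 8443, /api/prometheus_receiver) are failing, first at the TCP level (i/o timeout) and then after retry exhaustion (context deadline exceeded). A quick reachability probe, as a sketch only: the URL is taken from the log, the empty JSON body is an assumption, and this checks TCP/HTTP reachability, not the receiver's handling of real alerts:

    import socket
    import urllib.error
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

    def probe(url, timeout=5.0):
        """POST a placeholder body and report whether the endpoint answers at all."""
        req = urllib.request.Request(url, data=b"{}", method="POST")
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return "reachable, HTTP %d" % resp.status
        except urllib.error.HTTPError as e:
            return "reachable, HTTP %d" % e.code   # the server answered, even if unhappy
        except (urllib.error.URLError, socket.timeout) as e:
            return "unreachable: %s" % e           # matches the i/o timeout logged above

    print(probe(URL))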
Dec  7 05:26:38 np0005549474 nova_compute[256753]: 2025-12-07 10:26:38.990 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:26:38 np0005549474 nova_compute[256753]: 2025-12-07 10:26:38.991 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
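The ovsdbapp lines above show the connection to tcp:127.0.0.1:6640 cycling: roughly 5000 ms of silence triggers an inactivity probe and a transition to IDLE, and the peer's reply (the [POLLIN] wakeup) returns it to ACTIVE. A standalone toy of that state machine, assuming only the ~5000 ms threshold visible in the log; this is an illustration of the cycle, not the ovs.reconnect implementation:

    import time

    PROBE_INTERVAL_MS = 5000  # matches the ~5000 ms idle threshold logged here

    class InactivityProbe:
        """Toy ACTIVE/IDLE probe cycle mirroring the transitions logged above."""

        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def on_traffic(self):
            # Received data (the [POLLIN] wakeup) proves the peer is alive.
            self.last_activity = time.monotonic()
            if self.state == "IDLE":
                self.state = "ACTIVE"      # "entering ACTIVE"

        def tick(self, send_probe):
            idle_ms = (time.monotonic() - self.last_activity) * 1000
            if self.state == "ACTIVE" and idle_ms >= PROBE_INTERVAL_MS:
                send_probe()               # "sending inactivity probe"
                self.state = "IDLE"        # "entering IDLE"

The same cycle repeats later in this section at 10:26:43-44 and 10:26:49; it is routine keepalive behavior, not an error.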
Dec  7 05:26:39 np0005549474 systemd[1]: Started libpod-conmon-8994bd1d1ff9e8e751e4c7d2dcef89b41fbe5d5007d25798f0feb4257e916a57.scope.
Dec  7 05:26:39 np0005549474 podman[294384]: 2025-12-07 10:26:38.940485613 +0000 UTC m=+0.029301489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:26:39 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:26:39 np0005549474 podman[294384]: 2025-12-07 10:26:39.066290843 +0000 UTC m=+0.155106669 container init 8994bd1d1ff9e8e751e4c7d2dcef89b41fbe5d5007d25798f0feb4257e916a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_leakey, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 05:26:39 np0005549474 podman[294384]: 2025-12-07 10:26:39.074515458 +0000 UTC m=+0.163331254 container start 8994bd1d1ff9e8e751e4c7d2dcef89b41fbe5d5007d25798f0feb4257e916a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 05:26:39 np0005549474 podman[294384]: 2025-12-07 10:26:39.077972172 +0000 UTC m=+0.166788048 container attach 8994bd1d1ff9e8e751e4c7d2dcef89b41fbe5d5007d25798f0feb4257e916a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:26:39 np0005549474 determined_leakey[294401]: 167 167
Dec  7 05:26:39 np0005549474 systemd[1]: libpod-8994bd1d1ff9e8e751e4c7d2dcef89b41fbe5d5007d25798f0feb4257e916a57.scope: Deactivated successfully.
Dec  7 05:26:39 np0005549474 podman[294384]: 2025-12-07 10:26:39.081958981 +0000 UTC m=+0.170774797 container died 8994bd1d1ff9e8e751e4c7d2dcef89b41fbe5d5007d25798f0feb4257e916a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 05:26:39 np0005549474 systemd[1]: var-lib-containers-storage-overlay-1958d6a5d8180f23d095e647e9d8d44ff7a5d274350790eb62d6cbd536c2f54f-merged.mount: Deactivated successfully.
Dec  7 05:26:39 np0005549474 podman[294384]: 2025-12-07 10:26:39.131519981 +0000 UTC m=+0.220335797 container remove 8994bd1d1ff9e8e751e4c7d2dcef89b41fbe5d5007d25798f0feb4257e916a57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 05:26:39 np0005549474 systemd[1]: libpod-conmon-8994bd1d1ff9e8e751e4c7d2dcef89b41fbe5d5007d25798f0feb4257e916a57.scope: Deactivated successfully.
Dec  7 05:26:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:39.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:39 np0005549474 podman[294424]: 2025-12-07 10:26:39.372374228 +0000 UTC m=+0.061355253 container create 7eee23e3562d22371a68421ef01794768574d581808b61fb3655e8f2fb49cfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:26:39 np0005549474 systemd[1]: Started libpod-conmon-7eee23e3562d22371a68421ef01794768574d581808b61fb3655e8f2fb49cfd0.scope.
Dec  7 05:26:39 np0005549474 podman[294424]: 2025-12-07 10:26:39.345833785 +0000 UTC m=+0.034814850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:26:39 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:26:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0880326b826b290517194d81cfef43142494cef6bc69a92abc495f7da7b80467/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0880326b826b290517194d81cfef43142494cef6bc69a92abc495f7da7b80467/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0880326b826b290517194d81cfef43142494cef6bc69a92abc495f7da7b80467/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:39 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0880326b826b290517194d81cfef43142494cef6bc69a92abc495f7da7b80467/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
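The 0x7fffffff in these xfs remount notices is the signed 32-bit time_t limit; the kernel is flagging that the filesystem's timestamps run out in January 2038. The correspondence is a one-liner:

    import datetime

    # 0x7fffffff seconds after the Unix epoch, i.e. the signed 32-bit time_t limit.
    print(datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00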
Dec  7 05:26:39 np0005549474 podman[294424]: 2025-12-07 10:26:39.464781208 +0000 UTC m=+0.153762243 container init 7eee23e3562d22371a68421ef01794768574d581808b61fb3655e8f2fb49cfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 05:26:39 np0005549474 podman[294424]: 2025-12-07 10:26:39.474904644 +0000 UTC m=+0.163885659 container start 7eee23e3562d22371a68421ef01794768574d581808b61fb3655e8f2fb49cfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_leakey, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 05:26:39 np0005549474 podman[294424]: 2025-12-07 10:26:39.478148652 +0000 UTC m=+0.167129667 container attach 7eee23e3562d22371a68421ef01794768574d581808b61fb3655e8f2fb49cfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:26:39 np0005549474 nova_compute[256753]: 2025-12-07 10:26:39.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:26:39 np0005549474 nova_compute[256753]: 2025-12-07 10:26:39.756 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:26:39 np0005549474 nova_compute[256753]: 2025-12-07 10:26:39.757 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:26:39 np0005549474 eager_leakey[294440]: {
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:    "0": [
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:        {
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            "devices": [
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "/dev/loop3"
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            ],
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            "lv_name": "ceph_lv0",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            "lv_size": "21470642176",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            "name": "ceph_lv0",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            "tags": {
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.cluster_name": "ceph",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.crush_device_class": "",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.encrypted": "0",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.osd_id": "0",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.type": "block",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.vdo": "0",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:                "ceph.with_tpm": "0"
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            },
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            "type": "block",
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:            "vg_name": "ceph_vg0"
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:        }
Dec  7 05:26:39 np0005549474 eager_leakey[294440]:    ]
Dec  7 05:26:39 np0005549474 eager_leakey[294440]: }
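The JSON that eager_leakey just printed has the shape of a ceph-volume LVM listing: an OSD id keyed to a list of logical volumes, each with its devices, path, size, and ceph.* tags. A minimal consumer, assuming exactly the structure shown above (abbreviated here to the fields actually used):

    import json

    # Abbreviated copy of the block printed above.
    raw = """{
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "lv_size": "21470642176",
          "tags": {
            "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
            "ceph.type": "block"
          }
        }
      ]
    }"""

    inventory = json.loads(raw)
    for osd_id, lvs in inventory.items():
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30  # lv_size is a string of bytes
            print("osd.%s: %s on %s (%.1f GiB, fsid %s)" % (
                osd_id, lv["lv_path"], ",".join(lv["devices"]),
                size_gib, lv["tags"]["ceph.osd_fsid"]))
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (20.0 GiB, fsid 32dc95f1-...)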
Dec  7 05:26:39 np0005549474 nova_compute[256753]: 2025-12-07 10:26:39.779 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:26:39 np0005549474 systemd[1]: libpod-7eee23e3562d22371a68421ef01794768574d581808b61fb3655e8f2fb49cfd0.scope: Deactivated successfully.
Dec  7 05:26:39 np0005549474 podman[294424]: 2025-12-07 10:26:39.822557893 +0000 UTC m=+0.511538918 container died 7eee23e3562d22371a68421ef01794768574d581808b61fb3655e8f2fb49cfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 05:26:39 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0880326b826b290517194d81cfef43142494cef6bc69a92abc495f7da7b80467-merged.mount: Deactivated successfully.
Dec  7 05:26:39 np0005549474 podman[294424]: 2025-12-07 10:26:39.890655889 +0000 UTC m=+0.579636914 container remove 7eee23e3562d22371a68421ef01794768574d581808b61fb3655e8f2fb49cfd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Dec  7 05:26:39 np0005549474 systemd[1]: libpod-conmon-7eee23e3562d22371a68421ef01794768574d581808b61fb3655e8f2fb49cfd0.scope: Deactivated successfully.
Dec  7 05:26:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:39] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  7 05:26:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:39] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec  7 05:26:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:40.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:40 np0005549474 podman[294554]: 2025-12-07 10:26:40.711763817 +0000 UTC m=+0.049165152 container create 150b300b85d660deee3f3419c365fa88fcb046e890c592bc184a29683f5a07ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Dec  7 05:26:40 np0005549474 systemd[1]: Started libpod-conmon-150b300b85d660deee3f3419c365fa88fcb046e890c592bc184a29683f5a07ab.scope.
Dec  7 05:26:40 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:26:40 np0005549474 podman[294554]: 2025-12-07 10:26:40.691965127 +0000 UTC m=+0.029366492 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:26:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec  7 05:26:40 np0005549474 podman[294554]: 2025-12-07 10:26:40.79516735 +0000 UTC m=+0.132568695 container init 150b300b85d660deee3f3419c365fa88fcb046e890c592bc184a29683f5a07ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:26:40 np0005549474 podman[294554]: 2025-12-07 10:26:40.805606565 +0000 UTC m=+0.143007920 container start 150b300b85d660deee3f3419c365fa88fcb046e890c592bc184a29683f5a07ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_jackson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 05:26:40 np0005549474 podman[294554]: 2025-12-07 10:26:40.810156899 +0000 UTC m=+0.147558284 container attach 150b300b85d660deee3f3419c365fa88fcb046e890c592bc184a29683f5a07ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 05:26:40 np0005549474 relaxed_jackson[294570]: 167 167
Dec  7 05:26:40 np0005549474 systemd[1]: libpod-150b300b85d660deee3f3419c365fa88fcb046e890c592bc184a29683f5a07ab.scope: Deactivated successfully.
Dec  7 05:26:40 np0005549474 conmon[294570]: conmon 150b300b85d660deee3f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-150b300b85d660deee3f3419c365fa88fcb046e890c592bc184a29683f5a07ab.scope/container/memory.events
Dec  7 05:26:40 np0005549474 podman[294554]: 2025-12-07 10:26:40.814617291 +0000 UTC m=+0.152018656 container died 150b300b85d660deee3f3419c365fa88fcb046e890c592bc184a29683f5a07ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:26:40 np0005549474 systemd[1]: var-lib-containers-storage-overlay-c50c18b9a2d65956eca3a56949631e8f5daa6205bc24c818433f4e6b6c3cee5f-merged.mount: Deactivated successfully.
Dec  7 05:26:40 np0005549474 podman[294554]: 2025-12-07 10:26:40.86080954 +0000 UTC m=+0.198210905 container remove 150b300b85d660deee3f3419c365fa88fcb046e890c592bc184a29683f5a07ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_jackson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 05:26:40 np0005549474 systemd[1]: libpod-conmon-150b300b85d660deee3f3419c365fa88fcb046e890c592bc184a29683f5a07ab.scope: Deactivated successfully.
Dec  7 05:26:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:41 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:41 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:26:41 np0005549474 podman[294594]: 2025-12-07 10:26:41.08858603 +0000 UTC m=+0.063503542 container create bcd151611a50cd455206537be4b4e6749084029c99ca3d2f87a1b83dcbe51c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bhabha, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 05:26:41 np0005549474 systemd[1]: Started libpod-conmon-bcd151611a50cd455206537be4b4e6749084029c99ca3d2f87a1b83dcbe51c80.scope.
Dec  7 05:26:41 np0005549474 podman[294594]: 2025-12-07 10:26:41.056510796 +0000 UTC m=+0.031428368 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:26:41 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:26:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab09801f7e8242a625d6a54c89e26daa6a5a3b098c5c96131e27355dd27554e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab09801f7e8242a625d6a54c89e26daa6a5a3b098c5c96131e27355dd27554e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab09801f7e8242a625d6a54c89e26daa6a5a3b098c5c96131e27355dd27554e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:41 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab09801f7e8242a625d6a54c89e26daa6a5a3b098c5c96131e27355dd27554e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:26:41 np0005549474 podman[294594]: 2025-12-07 10:26:41.193693337 +0000 UTC m=+0.168610859 container init bcd151611a50cd455206537be4b4e6749084029c99ca3d2f87a1b83dcbe51c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bhabha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:26:41 np0005549474 podman[294594]: 2025-12-07 10:26:41.206479305 +0000 UTC m=+0.181396827 container start bcd151611a50cd455206537be4b4e6749084029c99ca3d2f87a1b83dcbe51c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:26:41 np0005549474 podman[294594]: 2025-12-07 10:26:41.211351918 +0000 UTC m=+0.186269440 container attach bcd151611a50cd455206537be4b4e6749084029c99ca3d2f87a1b83dcbe51c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 05:26:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:41.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:41 np0005549474 lvm[294685]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:26:41 np0005549474 lvm[294685]: VG ceph_vg0 finished
Dec  7 05:26:41 np0005549474 great_bhabha[294610]: {}
Dec  7 05:26:42 np0005549474 systemd[1]: libpod-bcd151611a50cd455206537be4b4e6749084029c99ca3d2f87a1b83dcbe51c80.scope: Deactivated successfully.
Dec  7 05:26:42 np0005549474 systemd[1]: libpod-bcd151611a50cd455206537be4b4e6749084029c99ca3d2f87a1b83dcbe51c80.scope: Consumed 1.407s CPU time.
Dec  7 05:26:42 np0005549474 podman[294688]: 2025-12-07 10:26:42.058255048 +0000 UTC m=+0.034770119 container died bcd151611a50cd455206537be4b4e6749084029c99ca3d2f87a1b83dcbe51c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:26:42 np0005549474 systemd[1]: var-lib-containers-storage-overlay-bab09801f7e8242a625d6a54c89e26daa6a5a3b098c5c96131e27355dd27554e-merged.mount: Deactivated successfully.
Dec  7 05:26:42 np0005549474 podman[294688]: 2025-12-07 10:26:42.097933079 +0000 UTC m=+0.074448130 container remove bcd151611a50cd455206537be4b4e6749084029c99ca3d2f87a1b83dcbe51c80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:26:42 np0005549474 systemd[1]: libpod-conmon-bcd151611a50cd455206537be4b4e6749084029c99ca3d2f87a1b83dcbe51c80.scope: Deactivated successfully.
Dec  7 05:26:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:26:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:26:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:26:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:26:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:42.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:26:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:26:42
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'volumes', '.nfs', 'images', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'backups', 'default.rgw.log', 'default.rgw.meta']
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:26:42 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:26:42 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:26:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec  7 05:26:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
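Each pg_autoscaler line above is the same computation: pg target = (fraction of capacity used) x bias x a cluster-wide PG budget. Plugging the logged numbers back in, the budget works out to 300, which would be consistent with the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs; that is an inference from the arithmetic, not a setting quoted anywhere in this log. A worked check against three of the lines:

    # Reproduce the pg_autoscaler targets logged above.
    # Assumption (inferred, not quoted from config): budget = 100 PGs/OSD * 3 OSDs.
    BUDGET = 300

    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
        ("images",             0.000665858301588852,  1.0),
    ]

    for name, usage, bias in pools:
        print(name, usage * bias * BUDGET)
    # Matches the logged targets (0.0021557249951162337, 0.0006104707950771635,
    # 0.19975749047665559) up to floating-point rounding; each is then quantized
    # to the pg_num shown as "(current N)".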
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:26:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:26:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:43.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:43 np0005549474 nova_compute[256753]: 2025-12-07 10:26:43.992 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:26:43 np0005549474 nova_compute[256753]: 2025-12-07 10:26:43.994 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:26:43 np0005549474 nova_compute[256753]: 2025-12-07 10:26:43.995 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:26:43 np0005549474 nova_compute[256753]: 2025-12-07 10:26:43.995 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:26:44 np0005549474 nova_compute[256753]: 2025-12-07 10:26:44.031 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:26:44 np0005549474 nova_compute[256753]: 2025-12-07 10:26:44.032 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:26:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:44.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec  7 05:26:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:45.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:45 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:46 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:26:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:46.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec  7 05:26:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:47.249Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:26:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:47.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:48.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:48 np0005549474 nova_compute[256753]: 2025-12-07 10:26:48.773 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:26:48 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:26:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:48.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:26:49 np0005549474 nova_compute[256753]: 2025-12-07 10:26:49.032 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:26:49 np0005549474 nova_compute[256753]: 2025-12-07 10:26:49.034 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:26:49 np0005549474 nova_compute[256753]: 2025-12-07 10:26:49.035 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:26:49 np0005549474 nova_compute[256753]: 2025-12-07 10:26:49.035 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:26:49 np0005549474 nova_compute[256753]: 2025-12-07 10:26:49.072 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:26:49 np0005549474 nova_compute[256753]: 2025-12-07 10:26:49.073 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
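
This six-line ovsdbapp burst recurs every five seconds and is the python-ovs reconnect state machine doing its keepalive: after ~5000 ms of idle time on tcp:127.0.0.1:6640 it sends an inactivity probe and drops to IDLE, then the POLLIN on fd 24 (the server's reply) moves it back to ACTIVE. A sketch for timing those round trips from the oslo timestamps embedded in the messages; probe_rtts is a hypothetical helper, not part of ovsdbapp:

    import re
    from datetime import datetime

    # oslo log timestamp as it appears inside the nova_compute messages,
    # e.g. "2025-12-07 10:26:49.035".
    TS = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}")

    def probe_rtts(lines):
        """Yield (probe_sent_at, seconds_until_ACTIVE) for each probe cycle."""
        sent = None
        for line in lines:
            m = TS.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(0), "%Y-%m-%d %H:%M:%S.%f")
            if "sending inactivity probe" in line:
                sent = ts
            elif "entering ACTIVE" in line and sent is not None:
                yield sent, (ts - sent).total_seconds()
                sent = None

For the cycle above (probe at 10:26:49.035, ACTIVE at 10:26:49.073) that is a ~38 ms round trip, i.e. the local ovsdb-server is answering promptly.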
Dec  7 05:26:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:49.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:49] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:26:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:49] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:26:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:50.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:50 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:50 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:51 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:51 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
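
The nfs-ganesha container logs the same four-line grace cycle roughly every five seconds: "NFS Server Now IN GRACE, duration 90", a client-info reload that finds zero clients, and rados_cluster_grace_enforcing returning -45. Since a 90-second grace period should start once and then lift, repeated starts this close together look like the server is stuck re-entering grace. A detection sketch under that reading; grace_restarts is an illustrative helper, and note that ganesha's own timestamp here is day-first (07/12/2025 matches the Dec 7 syslog prefix):

    import re
    from collections import deque
    from datetime import datetime, timedelta

    GRACE = re.compile(r"(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}).*NFS Server Now IN GRACE")

    def grace_restarts(lines, window=timedelta(seconds=90), threshold=3):
        """Warn when more grace starts land inside one nominal grace window
        than `threshold` allows (one start per 90 s would be normal)."""
        starts = deque()
        for line in lines:
            m = GRACE.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), "%d/%m/%Y %H:%M:%S")
            starts.append(ts)
            while starts and ts - starts[0] > window:
                starts.popleft()
            if len(starts) >= threshold:
                yield f"{len(starts)} grace starts in the last {window} (at {ts})"

On this capture the generator fires almost immediately, which points at asking why reclaim never completes rather than at any single line.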
Dec  7 05:26:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:51.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:52 np0005549474 podman[294762]: 2025-12-07 10:26:52.301107896 +0000 UTC m=+0.102658450 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
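
The podman health_status events, like the multipathd one above, embed the container's full Kolla config as a Python-style dict literal in the config_data= field. It can be recovered mechanically with brace matching plus ast.literal_eval; this sketch assumes, as holds for every such line in this log, that no brace appears inside a quoted string:

    import ast

    def config_data(line):
        """Extract and parse the config_data={...} literal from a podman
        health_status journal line."""
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced config_data literal")

For the multipathd line, config_data(line)["healthcheck"]["test"] comes back as '/openstack/healthcheck', matching the health_status=healthy result podman reports.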
Dec  7 05:26:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:52.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:52 np0005549474 podman[294763]: 2025-12-07 10:26:52.369016548 +0000 UTC m=+0.167119608 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  7 05:26:52 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:26:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:53.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:26:54 np0005549474 nova_compute[256753]: 2025-12-07 10:26:54.074 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:26:54 np0005549474 nova_compute[256753]: 2025-12-07 10:26:54.075 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:26:54 np0005549474 nova_compute[256753]: 2025-12-07 10:26:54.076 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:26:54 np0005549474 nova_compute[256753]: 2025-12-07 10:26:54.076 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:26:54 np0005549474 nova_compute[256753]: 2025-12-07 10:26:54.077 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:26:54 np0005549474 nova_compute[256753]: 2025-12-07 10:26:54.079 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:26:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:54.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:54 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:26:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:55.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:26:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:26:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:55 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:26:56 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:26:56 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:26:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:56.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:26:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:57.250Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:26:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:57.250Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:26:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:57.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:26:57 np0005549474 podman[294814]: 2025-12-07 10:26:57.279445117 +0000 UTC m=+0.084805193 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Dec  7 05:26:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:57.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:26:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
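
This handle_command/audit pair shows up every ~15 seconds: mgr.compute-0.dotugk (the mgr at 192.168.122.100) polls the monitor for the OSD blocklist, and the audit channel records the dispatch. The equivalent query by hand, sketched through the ceph CLI; the subprocess wrapper is illustrative and assumes admin credentials on the host:

    import json
    import subprocess

    def osd_blocklist():
        """Run the same mon command the mgr issues above and parse the JSON."""
        out = subprocess.run(
            ["ceph", "osd", "blocklist", "ls", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

On a quiet cluster like the one these pgmap lines describe, this typically returns an empty list.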
Dec  7 05:26:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:26:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:26:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:58.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:26:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:26:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:26:58.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:26:59 np0005549474 nova_compute[256753]: 2025-12-07 10:26:59.081 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:26:59 np0005549474 nova_compute[256753]: 2025-12-07 10:26:59.083 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:26:59 np0005549474 nova_compute[256753]: 2025-12-07 10:26:59.083 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:26:59 np0005549474 nova_compute[256753]: 2025-12-07 10:26:59.084 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:26:59 np0005549474 nova_compute[256753]: 2025-12-07 10:26:59.115 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:26:59 np0005549474 nova_compute[256753]: 2025-12-07 10:26:59.116 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:26:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:26:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:26:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:59.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:26:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  7 05:26:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:26:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  7 05:27:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:00.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:00 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:00 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:01 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:01 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:01.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:02.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:02 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:27:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:03.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:27:04 np0005549474 nova_compute[256753]: 2025-12-07 10:27:04.117 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:27:04 np0005549474 nova_compute[256753]: 2025-12-07 10:27:04.118 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:27:04 np0005549474 nova_compute[256753]: 2025-12-07 10:27:04.118 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:27:04 np0005549474 nova_compute[256753]: 2025-12-07 10:27:04.119 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:27:04 np0005549474 nova_compute[256753]: 2025-12-07 10:27:04.175 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:27:04 np0005549474 nova_compute[256753]: 2025-12-07 10:27:04.175 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:27:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:04.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:04 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:27:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:05.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:05 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:06 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:06 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:06.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:06 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:07.253Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:27:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:07.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:08.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:08 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:27:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:08.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:27:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:08.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:27:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:08.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
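
These alertmanager warn/error triples repeat for the whole section: both ceph-dashboard webhook targets (compute-1 and compute-2, port 8443) fail with dial ... i/o timeout, and every notification is retried and then canceled at the context deadline, so the receivers are unreachable rather than erroring. A throwaway stand-in listener can separate "port unreachable" from "application failure"; this is a diagnostic mock for that purpose only, not the Ceph dashboard's real /api/prometheus_receiver implementation:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        """Accept any POST and answer 200, logging who called."""
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            print(f"{self.client_address[0]} POST {self.path}: {len(body)} bytes")
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()

If alertmanager's dial still times out against this listener on compute-1 or compute-2, the problem is the network or firewall between the nodes rather than the dashboard service itself.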
Dec  7 05:27:09 np0005549474 nova_compute[256753]: 2025-12-07 10:27:09.202 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:27:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:09.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  7 05:27:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  7 05:27:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:10.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:10 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:10 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:11 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:11 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:11.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:12.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:27:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:27:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:27:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:27:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:27:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:27:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:27:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:27:12 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:13.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:14 np0005549474 nova_compute[256753]: 2025-12-07 10:27:14.204 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:27:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:14.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:14 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:27:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:15.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:15 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:16 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:16 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:16.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:16 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:17.254Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:27:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:17.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:27:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:17.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:18.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:18 np0005549474 nova_compute[256753]: 2025-12-07 10:27:18.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:27:18 np0005549474 nova_compute[256753]: 2025-12-07 10:27:18.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  7 05:27:18 np0005549474 nova_compute[256753]: 2025-12-07 10:27:18.769 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  7 05:27:18 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:27:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:18.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:27:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:18.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:27:19 np0005549474 nova_compute[256753]: 2025-12-07 10:27:19.206 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:27:19 np0005549474 nova_compute[256753]: 2025-12-07 10:27:19.208 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:27:19 np0005549474 nova_compute[256753]: 2025-12-07 10:27:19.208 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:27:19 np0005549474 nova_compute[256753]: 2025-12-07 10:27:19.208 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:27:19 np0005549474 nova_compute[256753]: 2025-12-07 10:27:19.254 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:27:19 np0005549474 nova_compute[256753]: 2025-12-07 10:27:19.255 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:27:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:19.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:19] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:27:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:19] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:27:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:20.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:20 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:21 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:20 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:21.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:22.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:22 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1360: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:23 np0005549474 podman[294892]: 2025-12-07 10:27:23.256317501 +0000 UTC m=+0.070786482 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  7 05:27:23 np0005549474 podman[294893]: 2025-12-07 10:27:23.29625977 +0000 UTC m=+0.097954512 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  7 05:27:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:23.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:24 np0005549474 nova_compute[256753]: 2025-12-07 10:27:24.255 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:27:24 np0005549474 nova_compute[256753]: 2025-12-07 10:27:24.257 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:27:24 np0005549474 nova_compute[256753]: 2025-12-07 10:27:24.257 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:27:24 np0005549474 nova_compute[256753]: 2025-12-07 10:27:24.258 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:27:24 np0005549474 nova_compute[256753]: 2025-12-07 10:27:24.289 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:27:24 np0005549474 nova_compute[256753]: 2025-12-07 10:27:24.290 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:27:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:24.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:24 np0005549474 nova_compute[256753]: 2025-12-07 10:27:24.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:27:24 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1361: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:27:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:24 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:24 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:24 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:25 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:25 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:25.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:26.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:26 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1362: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:27.256Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:27:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:27:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
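Note: the audit line above shows the mgr polling the OSD blocklist (roughly every 15 s in this capture). The same query can be reproduced with the ceph CLI; a hedged sketch via subprocess, assuming a local ceph.conf and a usable keyring.

    import json, subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out) if out.strip() else [])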
Dec  7 05:27:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:27.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:27 np0005549474 nova_compute[256753]: 2025-12-07 10:27:27.795 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:27:27 np0005549474 nova_compute[256753]: 2025-12-07 10:27:27.795 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
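Note: the two nova_compute lines above, like the other "Running periodic task ComputeManager.*" lines throughout this capture, come from oslo.service's periodic-task machinery. A hedged minimal example of the same pattern; the manager and task names here are illustrative, not nova's.

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _cleanup(self, context):
            # Stand-in for tasks like _cleanup_incomplete_migrations.
            print("running periodic cleanup")

    Manager().run_periodic_tasks(context=None)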
Dec  7 05:27:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:28 np0005549474 podman[294945]: 2025-12-07 10:27:28.248065547 +0000 UTC m=+0.060562401 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
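Note: the podman event above is a periodic container healthcheck for ovn_metadata_agent (health_status=healthy, failing streak 0, test /openstack/healthcheck). The same check can be forced by hand; exit status 0 means healthy. The container name is taken from the log.

    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")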
Dec  7 05:27:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:28.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:28 np0005549474 nova_compute[256753]: 2025-12-07 10:27:28.776 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:27:28 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1363: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:27:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:28.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:27:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:28.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
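Note: both alertmanager webhook receivers above keep timing out ("dial tcp ... i/o timeout", then "context deadline exceeded"), which points at the TCP layer rather than the receiver application. A hedged reachability probe for the two endpoints named in the log:

    import socket

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        try:
            socket.create_connection((host, 8443), timeout=3).close()
            print(host, "tcp/8443 reachable")
        except OSError as exc:
            print(host, "tcp/8443 unreachable:", exc)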
Dec  7 05:27:29 np0005549474 nova_compute[256753]: 2025-12-07 10:27:29.291 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  7 05:27:29 np0005549474 nova_compute[256753]: 2025-12-07 10:27:29.293 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:27:29 np0005549474 nova_compute[256753]: 2025-12-07 10:27:29.293 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec  7 05:27:29 np0005549474 nova_compute[256753]: 2025-12-07 10:27:29.294 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:27:29 np0005549474 nova_compute[256753]: 2025-12-07 10:27:29.294 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  7 05:27:29 np0005549474 nova_compute[256753]: 2025-12-07 10:27:29.296 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:27:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:29.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:29] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:27:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:29] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
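Note: the paired mgr lines above are Prometheus 2.51.0 scraping the ceph-mgr prometheus module (48456 bytes per scrape, 10 s apart in this capture). A hedged manual scrape; 9283 is the module's usual default port, which this log does not itself confirm.

    from urllib.request import urlopen

    body = urlopen("http://192.168.122.100:9283/metrics", timeout=5).read()
    print(body.decode().splitlines()[0], f"... {len(body)} bytes")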
Dec  7 05:27:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:30 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:30 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:30.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:30 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1364: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:31.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:32.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:32 np0005549474 nova_compute[256753]: 2025-12-07 10:27:32.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:27:32 np0005549474 nova_compute[256753]: 2025-12-07 10:27:32.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  7 05:27:32 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1365: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:33.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:33 np0005549474 nova_compute[256753]: 2025-12-07 10:27:33.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:27:34 np0005549474 nova_compute[256753]: 2025-12-07 10:27:34.294 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:27:34 np0005549474 nova_compute[256753]: 2025-12-07 10:27:34.297 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:27:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:27:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:34.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:27:34 np0005549474 nova_compute[256753]: 2025-12-07 10:27:34.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:27:34 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1366: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:27:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:35 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:35 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:35.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:35 np0005549474 nova_compute[256753]: 2025-12-07 10:27:35.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:27:35 np0005549474 nova_compute[256753]: 2025-12-07 10:27:35.780 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:27:35 np0005549474 nova_compute[256753]: 2025-12-07 10:27:35.781 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:27:35 np0005549474 nova_compute[256753]: 2025-12-07 10:27:35.781 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
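Note: the acquiring/acquired/released triplet above is oslo.concurrency's lockutils guarding the resource tracker's "compute_resources" lock. A hedged minimal equivalent of that pattern:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def audit():
        # Critical section; concurrent callers serialize on the named lock.
        pass

    audit()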
Dec  7 05:27:35 np0005549474 nova_compute[256753]: 2025-12-07 10:27:35.781 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  7 05:27:35 np0005549474 nova_compute[256753]: 2025-12-07 10:27:35.781 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:27:36 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:27:36 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1661855641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.273 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
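Note: nova's resource audit shells out to ceph df (the lines above; it returned 0 in 0.491 s). A hedged re-run of the same command, parsing the JSON for cluster-wide free space; the client id and conf path are copied from the log, and "stats"/"total_avail_bytes" are the usual ceph df JSON keys.

    import json, subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print("total_avail_bytes:", json.loads(raw)["stats"]["total_avail_bytes"])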
Dec  7 05:27:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:36.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.501 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.504 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4493MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.504 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.505 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.742 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.743 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  7 05:27:36 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1367: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.830 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing inventories for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.904 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updating ProviderTree inventory for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.904 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Updating inventory in ProviderTree for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.923 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing aggregate associations for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.948 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Refreshing trait associations for resource provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_ABM,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
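Note: a worked check of the inventory nova pushes to placement above. Per resource class, schedulable capacity is (total - reserved) * allocation_ratio.

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, d in inv.items():
        cap = (d["total"] - d["reserved"]) * d["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")  # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2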
Dec  7 05:27:36 np0005549474 nova_compute[256753]: 2025-12-07 10:27:36.966 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  7 05:27:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:37.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:27:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:27:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1453650458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:27:37 np0005549474 nova_compute[256753]: 2025-12-07 10:27:37.439 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  7 05:27:37 np0005549474 nova_compute[256753]: 2025-12-07 10:27:37.445 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  7 05:27:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:37.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:37 np0005549474 nova_compute[256753]: 2025-12-07 10:27:37.470 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  7 05:27:37 np0005549474 nova_compute[256753]: 2025-12-07 10:27:37.472 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  7 05:27:37 np0005549474 nova_compute[256753]: 2025-12-07 10:27:37.472 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:27:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:38.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:38 np0005549474 nova_compute[256753]: 2025-12-07 10:27:38.473 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:27:38 np0005549474 nova_compute[256753]: 2025-12-07 10:27:38.473 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:27:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:27:38.638 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  7 05:27:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:27:38.639 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  7 05:27:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:27:38.639 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  7 05:27:38 np0005549474 nova_compute[256753]: 2025-12-07 10:27:38.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:27:38 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1368: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:27:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:38.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:27:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:38.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:27:39 np0005549474 nova_compute[256753]: 2025-12-07 10:27:39.296 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:27:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:39.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:39] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:27:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:39] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:27:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:39 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:39 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:39 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:40 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:40 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:40.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:40 np0005549474 nova_compute[256753]: 2025-12-07 10:27:40.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  7 05:27:40 np0005549474 nova_compute[256753]: 2025-12-07 10:27:40.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  7 05:27:40 np0005549474 nova_compute[256753]: 2025-12-07 10:27:40.755 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  7 05:27:40 np0005549474 nova_compute[256753]: 2025-12-07 10:27:40.782 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  7 05:27:40 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1369: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:41.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:42.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:27:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:27:42
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images', '.rgw.root', 'volumes', 'vms']
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
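Note: the balancer pass above prepared 0/10 upmap changes, i.e. the 337 PGs are already evenly placed. A hedged status query for the same mgr module:

    import subprocess

    print(subprocess.run(["ceph", "balancer", "status"],
                         capture_output=True, text=True, check=True).stdout)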
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:27:42 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1370: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
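Note: the pg_autoscaler block above logs, per pool, the fraction of space used, the bias, and the raw pg target quantized against the current pg_num. A hedged parser that tabulates those fields from lines of this exact shape:

    import re

    PAT = re.compile(
        r"Pool '(?P<pool>[^']+)' .* using (?P<frac>[\d.e+-]+) of space, "
        r"bias (?P<bias>[\d.]+), pg target (?P<target>[\d.e+-]+) "
        r"quantized to (?P<q>\d+) \(current (?P<cur>\d+)\)"
    )

    line = ("Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, "
            "bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)")
    m = PAT.search(line)
    print(m.group("pool"), m.group("q"), m.group("cur"))  # .mgr 1 1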
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:27:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:27:43 np0005549474 podman[295172]: 2025-12-07 10:27:43.347825503 +0000 UTC m=+0.072120087 container exec 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:27:43 np0005549474 podman[295172]: 2025-12-07 10:27:43.439526473 +0000 UTC m=+0.163820977 container exec_died 25a7da5f7682ffe41b9da1c7003597adeef70d8a143031df351713215e06e303 (image=quay.io/ceph/ceph:v19, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  7 05:27:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:43.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:43 np0005549474 podman[295291]: 2025-12-07 10:27:43.994858844 +0000 UTC m=+0.070793901 container exec 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:27:44 np0005549474 podman[295291]: 2025-12-07 10:27:44.00093616 +0000 UTC m=+0.076871227 container exec_died 37d8d5644ed4727ac486f1f71356ae17be0b3a7e389280c6fd674bce5edd77be (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:27:44 np0005549474 nova_compute[256753]: 2025-12-07 10:27:44.297 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:27:44 np0005549474 nova_compute[256753]: 2025-12-07 10:27:44.300 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  7 05:27:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:44.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:44 np0005549474 podman[295381]: 2025-12-07 10:27:44.436358021 +0000 UTC m=+0.066671649 container exec a4fe25d85321292dd92d6ed4b00781032c412e604e47993a7bacd8717161d6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:27:44 np0005549474 podman[295381]: 2025-12-07 10:27:44.448538803 +0000 UTC m=+0.078852431 container exec_died a4fe25d85321292dd92d6ed4b00781032c412e604e47993a7bacd8717161d6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:27:44 np0005549474 podman[295445]: 2025-12-07 10:27:44.694702225 +0000 UTC m=+0.066202736 container exec e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 05:27:44 np0005549474 podman[295445]: 2025-12-07 10:27:44.70553153 +0000 UTC m=+0.077032031 container exec_died e81de8242d06888165479e60d8fd2270a4720913e8aa13333744542bf7bb7b43 (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-0-ieiboq)
Dec  7 05:27:44 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1371: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:27:44 np0005549474 podman[295511]: 2025-12-07 10:27:44.971001968 +0000 UTC m=+0.075708975 container exec 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, vcs-type=git, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, name=keepalived, version=2.2.4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec  7 05:27:44 np0005549474 podman[295511]: 2025-12-07 10:27:44.979277134 +0000 UTC m=+0.083984081 container exec_died 65ad66b913ba1f3fc271a3043bcb52aa2909f95eca1d2b00c524bd55611ac97c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-0-vqhjze, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, com.redhat.component=keepalived-container, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, release=1793, version=2.2.4)
Dec  7 05:27:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:44 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:44 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:44 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:45 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:44 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:45 np0005549474 podman[295578]: 2025-12-07 10:27:45.265142698 +0000 UTC m=+0.077507415 container exec d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:27:45 np0005549474 podman[295578]: 2025-12-07 10:27:45.300144931 +0000 UTC m=+0.112509588 container exec_died d0d1372f82dce3a3977f9441814159d88bcd14a9a4ca988a888028fa841bc54b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:27:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:45.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:45 np0005549474 podman[295652]: 2025-12-07 10:27:45.592500793 +0000 UTC m=+0.077364151 container exec d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 05:27:45 np0005549474 podman[295652]: 2025-12-07 10:27:45.79117668 +0000 UTC m=+0.276039978 container exec_died d859f070edcf366265beb0789710a2ef62047d699da56c28e419087efd2e7342 (image=quay.io/ceph/grafana:10.4.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 05:27:46 np0005549474 podman[295767]: 2025-12-07 10:27:46.29257818 +0000 UTC m=+0.077493094 container exec 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:27:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:46.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:46 np0005549474 podman[295767]: 2025-12-07 10:27:46.717975509 +0000 UTC m=+0.502890333 container exec_died 59201b8f159775399fabb5eb44cb7b509a07aa290f61dcf1df7d048becb066fc (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 05:27:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:27:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:27:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:46 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1372: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:27:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:47.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:27:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:47.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:27:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1373: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec  7 05:27:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1374: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 696 B/s rd, 0 op/s
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:47 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:27:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:48.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:48 np0005549474 podman[295984]: 2025-12-07 10:27:48.381120793 +0000 UTC m=+0.043369773 container create 9b6fb2fbecdb3f177de2d87deed06b4a49508f1a149b100fdc87b7974fb5a0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:27:48 np0005549474 systemd[1]: Started libpod-conmon-9b6fb2fbecdb3f177de2d87deed06b4a49508f1a149b100fdc87b7974fb5a0dd.scope.
Dec  7 05:27:48 np0005549474 podman[295984]: 2025-12-07 10:27:48.361998782 +0000 UTC m=+0.024247792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:27:48 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:27:48 np0005549474 podman[295984]: 2025-12-07 10:27:48.496833959 +0000 UTC m=+0.159082969 container init 9b6fb2fbecdb3f177de2d87deed06b4a49508f1a149b100fdc87b7974fb5a0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 05:27:48 np0005549474 podman[295984]: 2025-12-07 10:27:48.505228377 +0000 UTC m=+0.167477357 container start 9b6fb2fbecdb3f177de2d87deed06b4a49508f1a149b100fdc87b7974fb5a0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jackson, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 05:27:48 np0005549474 podman[295984]: 2025-12-07 10:27:48.508402704 +0000 UTC m=+0.170651774 container attach 9b6fb2fbecdb3f177de2d87deed06b4a49508f1a149b100fdc87b7974fb5a0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jackson, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 05:27:48 np0005549474 blissful_jackson[296000]: 167 167
Dec  7 05:27:48 np0005549474 systemd[1]: libpod-9b6fb2fbecdb3f177de2d87deed06b4a49508f1a149b100fdc87b7974fb5a0dd.scope: Deactivated successfully.
Dec  7 05:27:48 np0005549474 podman[295984]: 2025-12-07 10:27:48.514064348 +0000 UTC m=+0.176313348 container died 9b6fb2fbecdb3f177de2d87deed06b4a49508f1a149b100fdc87b7974fb5a0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jackson, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:27:48 np0005549474 systemd[1]: var-lib-containers-storage-overlay-3ef3caf0751cde04a1ef6e2042c208c684b71ad2b7981aff5b49a3af23eab66b-merged.mount: Deactivated successfully.
Dec  7 05:27:48 np0005549474 podman[295984]: 2025-12-07 10:27:48.566506947 +0000 UTC m=+0.228755957 container remove 9b6fb2fbecdb3f177de2d87deed06b4a49508f1a149b100fdc87b7974fb5a0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jackson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:27:48 np0005549474 systemd[1]: libpod-conmon-9b6fb2fbecdb3f177de2d87deed06b4a49508f1a149b100fdc87b7974fb5a0dd.scope: Deactivated successfully.
Dec  7 05:27:48 np0005549474 podman[296024]: 2025-12-07 10:27:48.759729565 +0000 UTC m=+0.048650847 container create d633c6b2ca0e6e17424b052097a199b5f0a330f4d65e2a64a3c9a8d51d2152b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williams, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:27:48 np0005549474 systemd[1]: Started libpod-conmon-d633c6b2ca0e6e17424b052097a199b5f0a330f4d65e2a64a3c9a8d51d2152b4.scope.
Dec  7 05:27:48 np0005549474 podman[296024]: 2025-12-07 10:27:48.737288194 +0000 UTC m=+0.026209526 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:27:48 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:27:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e81c1fe20bf2edea01ccb9dcdc7301c704271a3cbd9768ec47d9f92b04ce825/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e81c1fe20bf2edea01ccb9dcdc7301c704271a3cbd9768ec47d9f92b04ce825/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e81c1fe20bf2edea01ccb9dcdc7301c704271a3cbd9768ec47d9f92b04ce825/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e81c1fe20bf2edea01ccb9dcdc7301c704271a3cbd9768ec47d9f92b04ce825/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:48 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e81c1fe20bf2edea01ccb9dcdc7301c704271a3cbd9768ec47d9f92b04ce825/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:48 np0005549474 podman[296024]: 2025-12-07 10:27:48.850874641 +0000 UTC m=+0.139795943 container init d633c6b2ca0e6e17424b052097a199b5f0a330f4d65e2a64a3c9a8d51d2152b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 05:27:48 np0005549474 podman[296024]: 2025-12-07 10:27:48.865106789 +0000 UTC m=+0.154028071 container start d633c6b2ca0e6e17424b052097a199b5f0a330f4d65e2a64a3c9a8d51d2152b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williams, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  7 05:27:48 np0005549474 podman[296024]: 2025-12-07 10:27:48.86953322 +0000 UTC m=+0.158454552 container attach d633c6b2ca0e6e17424b052097a199b5f0a330f4d65e2a64a3c9a8d51d2152b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:27:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:48.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:27:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:48 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:48 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:48 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:48 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:49 np0005549474 funny_williams[296040]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:27:49 np0005549474 funny_williams[296040]: --> All data devices are unavailable
Dec  7 05:27:49 np0005549474 systemd[1]: libpod-d633c6b2ca0e6e17424b052097a199b5f0a330f4d65e2a64a3c9a8d51d2152b4.scope: Deactivated successfully.
Dec  7 05:27:49 np0005549474 podman[296056]: 2025-12-07 10:27:49.244857313 +0000 UTC m=+0.035309074 container died d633c6b2ca0e6e17424b052097a199b5f0a330f4d65e2a64a3c9a8d51d2152b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williams, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:27:49 np0005549474 systemd[1]: var-lib-containers-storage-overlay-0e81c1fe20bf2edea01ccb9dcdc7301c704271a3cbd9768ec47d9f92b04ce825-merged.mount: Deactivated successfully.
Dec  7 05:27:49 np0005549474 podman[296056]: 2025-12-07 10:27:49.282001686 +0000 UTC m=+0.072453447 container remove d633c6b2ca0e6e17424b052097a199b5f0a330f4d65e2a64a3c9a8d51d2152b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_williams, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 05:27:49 np0005549474 systemd[1]: libpod-conmon-d633c6b2ca0e6e17424b052097a199b5f0a330f4d65e2a64a3c9a8d51d2152b4.scope: Deactivated successfully.
Dec  7 05:27:49 np0005549474 nova_compute[256753]: 2025-12-07 10:27:49.298 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:27:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:49.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1375: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec  7 05:27:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:49] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:27:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:49] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:27:50 np0005549474 podman[296160]: 2025-12-07 10:27:50.000744161 +0000 UTC m=+0.040742081 container create ed079018b365a9217bcd914a00d47b60120f6abb5bc66c0c4a9c77f83f71b965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:27:50 np0005549474 systemd[1]: Started libpod-conmon-ed079018b365a9217bcd914a00d47b60120f6abb5bc66c0c4a9c77f83f71b965.scope.
Dec  7 05:27:50 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:27:50 np0005549474 podman[296160]: 2025-12-07 10:27:50.075800368 +0000 UTC m=+0.115798208 container init ed079018b365a9217bcd914a00d47b60120f6abb5bc66c0c4a9c77f83f71b965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ishizaka, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 05:27:50 np0005549474 podman[296160]: 2025-12-07 10:27:49.982168185 +0000 UTC m=+0.022166015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:27:50 np0005549474 podman[296160]: 2025-12-07 10:27:50.083844197 +0000 UTC m=+0.123842017 container start ed079018b365a9217bcd914a00d47b60120f6abb5bc66c0c4a9c77f83f71b965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ishizaka, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:27:50 np0005549474 podman[296160]: 2025-12-07 10:27:50.087480966 +0000 UTC m=+0.127478776 container attach ed079018b365a9217bcd914a00d47b60120f6abb5bc66c0c4a9c77f83f71b965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:27:50 np0005549474 sharp_ishizaka[296177]: 167 167
Dec  7 05:27:50 np0005549474 systemd[1]: libpod-ed079018b365a9217bcd914a00d47b60120f6abb5bc66c0c4a9c77f83f71b965.scope: Deactivated successfully.
Dec  7 05:27:50 np0005549474 podman[296160]: 2025-12-07 10:27:50.091150457 +0000 UTC m=+0.131148267 container died ed079018b365a9217bcd914a00d47b60120f6abb5bc66c0c4a9c77f83f71b965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ishizaka, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 05:27:50 np0005549474 systemd[1]: var-lib-containers-storage-overlay-9e37b8bc8d6f140825969769ee3a964792a06af599333a5d88441d7c47ccf663-merged.mount: Deactivated successfully.
Dec  7 05:27:50 np0005549474 podman[296160]: 2025-12-07 10:27:50.129215864 +0000 UTC m=+0.169213654 container remove ed079018b365a9217bcd914a00d47b60120f6abb5bc66c0c4a9c77f83f71b965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:27:50 np0005549474 systemd[1]: libpod-conmon-ed079018b365a9217bcd914a00d47b60120f6abb5bc66c0c4a9c77f83f71b965.scope: Deactivated successfully.
Dec  7 05:27:50 np0005549474 podman[296201]: 2025-12-07 10:27:50.316953983 +0000 UTC m=+0.063436100 container create 785a141284774a828e19d35a036eb4409929da9126303e3e74bae1f99ecb473c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_turing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:27:50 np0005549474 systemd[1]: Started libpod-conmon-785a141284774a828e19d35a036eb4409929da9126303e3e74bae1f99ecb473c.scope.
Dec  7 05:27:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:50.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:50 np0005549474 podman[296201]: 2025-12-07 10:27:50.287664874 +0000 UTC m=+0.034147091 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:27:50 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:27:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48acb79cb2de373072c2a59e7a2b1ffe93b4296106cf09a27e5f312f6d13e1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48acb79cb2de373072c2a59e7a2b1ffe93b4296106cf09a27e5f312f6d13e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48acb79cb2de373072c2a59e7a2b1ffe93b4296106cf09a27e5f312f6d13e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:50 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48acb79cb2de373072c2a59e7a2b1ffe93b4296106cf09a27e5f312f6d13e1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:50 np0005549474 podman[296201]: 2025-12-07 10:27:50.419997772 +0000 UTC m=+0.166479909 container init 785a141284774a828e19d35a036eb4409929da9126303e3e74bae1f99ecb473c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_turing, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:27:50 np0005549474 podman[296201]: 2025-12-07 10:27:50.432143573 +0000 UTC m=+0.178625690 container start 785a141284774a828e19d35a036eb4409929da9126303e3e74bae1f99ecb473c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_turing, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:27:50 np0005549474 podman[296201]: 2025-12-07 10:27:50.436264026 +0000 UTC m=+0.182746173 container attach 785a141284774a828e19d35a036eb4409929da9126303e3e74bae1f99ecb473c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_turing, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]: {
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:    "0": [
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:        {
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            "devices": [
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "/dev/loop3"
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            ],
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            "lv_name": "ceph_lv0",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            "lv_size": "21470642176",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            "name": "ceph_lv0",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            "tags": {
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.cluster_name": "ceph",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.crush_device_class": "",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.encrypted": "0",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.osd_id": "0",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.type": "block",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.vdo": "0",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:                "ceph.with_tpm": "0"
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            },
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            "type": "block",
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:            "vg_name": "ceph_vg0"
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:        }
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]:    ]
Dec  7 05:27:50 np0005549474 dreamy_turing[296217]: }
Dec  7 05:27:50 np0005549474 systemd[1]: libpod-785a141284774a828e19d35a036eb4409929da9126303e3e74bae1f99ecb473c.scope: Deactivated successfully.
Dec  7 05:27:50 np0005549474 podman[296201]: 2025-12-07 10:27:50.819616688 +0000 UTC m=+0.566098835 container died 785a141284774a828e19d35a036eb4409929da9126303e3e74bae1f99ecb473c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 05:27:50 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b48acb79cb2de373072c2a59e7a2b1ffe93b4296106cf09a27e5f312f6d13e1c-merged.mount: Deactivated successfully.
Dec  7 05:27:50 np0005549474 podman[296201]: 2025-12-07 10:27:50.872420458 +0000 UTC m=+0.618902575 container remove 785a141284774a828e19d35a036eb4409929da9126303e3e74bae1f99ecb473c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 05:27:50 np0005549474 systemd[1]: libpod-conmon-785a141284774a828e19d35a036eb4409929da9126303e3e74bae1f99ecb473c.scope: Deactivated successfully.
Dec  7 05:27:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:51.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1376: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec  7 05:27:51 np0005549474 podman[296353]: 2025-12-07 10:27:51.584338188 +0000 UTC m=+0.039526149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:27:52 np0005549474 podman[296353]: 2025-12-07 10:27:52.137029016 +0000 UTC m=+0.592216937 container create f802217de0a2cb4c078b4084b1ab6642e1b801207a8b070adf64891260ea6f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:27:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:52.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:52 np0005549474 systemd[1]: Started libpod-conmon-f802217de0a2cb4c078b4084b1ab6642e1b801207a8b070adf64891260ea6f71.scope.
Dec  7 05:27:52 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:27:53 np0005549474 podman[296353]: 2025-12-07 10:27:53.330153016 +0000 UTC m=+1.785340937 container init f802217de0a2cb4c078b4084b1ab6642e1b801207a8b070adf64891260ea6f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_golick, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:27:53 np0005549474 podman[296353]: 2025-12-07 10:27:53.341069324 +0000 UTC m=+1.796257245 container start f802217de0a2cb4c078b4084b1ab6642e1b801207a8b070adf64891260ea6f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_golick, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 05:27:53 np0005549474 epic_golick[296371]: 167 167
Dec  7 05:27:53 np0005549474 systemd[1]: libpod-f802217de0a2cb4c078b4084b1ab6642e1b801207a8b070adf64891260ea6f71.scope: Deactivated successfully.
Dec  7 05:27:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:53.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:53 np0005549474 podman[296353]: 2025-12-07 10:27:53.599066977 +0000 UTC m=+2.054254928 container attach f802217de0a2cb4c078b4084b1ab6642e1b801207a8b070adf64891260ea6f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_golick, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:27:53 np0005549474 podman[296353]: 2025-12-07 10:27:53.602943703 +0000 UTC m=+2.058131624 container died f802217de0a2cb4c078b4084b1ab6642e1b801207a8b070adf64891260ea6f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_golick, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Dec  7 05:27:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1377: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec  7 05:27:53 np0005549474 systemd[1]: var-lib-containers-storage-overlay-b28141161b772f942062335441af7b41f8106d8854f176f14c0c966ededd307e-merged.mount: Deactivated successfully.
Dec  7 05:27:53 np0005549474 podman[296353]: 2025-12-07 10:27:53.65965726 +0000 UTC m=+2.114845141 container remove f802217de0a2cb4c078b4084b1ab6642e1b801207a8b070adf64891260ea6f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_golick, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  7 05:27:53 np0005549474 systemd[1]: libpod-conmon-f802217de0a2cb4c078b4084b1ab6642e1b801207a8b070adf64891260ea6f71.scope: Deactivated successfully.
Dec  7 05:27:53 np0005549474 podman[296377]: 2025-12-07 10:27:53.72096517 +0000 UTC m=+0.329975326 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec  7 05:27:53 np0005549474 podman[296384]: 2025-12-07 10:27:53.825726517 +0000 UTC m=+0.430021165 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  7 05:27:53 np0005549474 podman[296438]: 2025-12-07 10:27:53.870374695 +0000 UTC m=+0.052365509 container create 42162f8669110d7aeee28aefe34e748865046df56b62c485a36136bf1ee0a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 05:27:53 np0005549474 systemd[1]: Started libpod-conmon-42162f8669110d7aeee28aefe34e748865046df56b62c485a36136bf1ee0a259.scope.
Dec  7 05:27:53 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:27:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985a2e5a30cf794aacb1d209a80dc002881af8076c7f699d0764fce2bd5a595e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:53 np0005549474 podman[296438]: 2025-12-07 10:27:53.850355479 +0000 UTC m=+0.032346333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:27:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985a2e5a30cf794aacb1d209a80dc002881af8076c7f699d0764fce2bd5a595e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985a2e5a30cf794aacb1d209a80dc002881af8076c7f699d0764fce2bd5a595e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:53 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985a2e5a30cf794aacb1d209a80dc002881af8076c7f699d0764fce2bd5a595e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:27:53 np0005549474 podman[296438]: 2025-12-07 10:27:53.954045366 +0000 UTC m=+0.136036180 container init 42162f8669110d7aeee28aefe34e748865046df56b62c485a36136bf1ee0a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 05:27:53 np0005549474 podman[296438]: 2025-12-07 10:27:53.966832574 +0000 UTC m=+0.148823378 container start 42162f8669110d7aeee28aefe34e748865046df56b62c485a36136bf1ee0a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 05:27:53 np0005549474 podman[296438]: 2025-12-07 10:27:53.969786165 +0000 UTC m=+0.151776979 container attach 42162f8669110d7aeee28aefe34e748865046df56b62c485a36136bf1ee0a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:27:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:53 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:54 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:54 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:54 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:54 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:54 np0005549474 nova_compute[256753]: 2025-12-07 10:27:54.301 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:27:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:54.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:54 np0005549474 lvm[296537]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:27:54 np0005549474 lvm[296537]: VG ceph_vg0 finished
Dec  7 05:27:54 np0005549474 eloquent_williamson[296462]: {}
Dec  7 05:27:54 np0005549474 systemd[1]: libpod-42162f8669110d7aeee28aefe34e748865046df56b62c485a36136bf1ee0a259.scope: Deactivated successfully.
Dec  7 05:27:54 np0005549474 systemd[1]: libpod-42162f8669110d7aeee28aefe34e748865046df56b62c485a36136bf1ee0a259.scope: Consumed 1.125s CPU time.
Dec  7 05:27:54 np0005549474 conmon[296462]: conmon 42162f8669110d7aeee2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42162f8669110d7aeee28aefe34e748865046df56b62c485a36136bf1ee0a259.scope/container/memory.events
Dec  7 05:27:54 np0005549474 podman[296438]: 2025-12-07 10:27:54.734794652 +0000 UTC m=+0.916785476 container died 42162f8669110d7aeee28aefe34e748865046df56b62c485a36136bf1ee0a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 05:27:54 np0005549474 systemd[1]: var-lib-containers-storage-overlay-985a2e5a30cf794aacb1d209a80dc002881af8076c7f699d0764fce2bd5a595e-merged.mount: Deactivated successfully.
Dec  7 05:27:54 np0005549474 podman[296438]: 2025-12-07 10:27:54.778276388 +0000 UTC m=+0.960267222 container remove 42162f8669110d7aeee28aefe34e748865046df56b62c485a36136bf1ee0a259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:27:54 np0005549474 systemd[1]: libpod-conmon-42162f8669110d7aeee28aefe34e748865046df56b62c485a36136bf1ee0a259.scope: Deactivated successfully.
Dec  7 05:27:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 05:27:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 05:27:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:55.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1378: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec  7 05:27:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:55 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:27:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:56.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:57.261Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:27:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:27:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:27:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:27:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:57.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:27:57 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1379: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec  7 05:27:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:58.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:27:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:58.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:27:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:27:58.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:27:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:58 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:27:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:59 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:27:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:59 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:27:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:27:59 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:27:59 np0005549474 podman[296585]: 2025-12-07 10:27:59.295165348 +0000 UTC m=+0.095183086 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 05:27:59 np0005549474 nova_compute[256753]: 2025-12-07 10:27:59.303 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:27:59 np0005549474 nova_compute[256753]: 2025-12-07 10:27:59.305 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:27:59 np0005549474 nova_compute[256753]: 2025-12-07 10:27:59.305 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:27:59 np0005549474 nova_compute[256753]: 2025-12-07 10:27:59.305 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:27:59 np0005549474 nova_compute[256753]: 2025-12-07 10:27:59.329 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:27:59 np0005549474 nova_compute[256753]: 2025-12-07 10:27:59.330 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:27:59 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:27:59 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:27:59 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:59.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:27:59 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1380: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:27:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:59] "GET /metrics HTTP/1.1" 200 48378 "" "Prometheus/2.51.0"
Dec  7 05:27:59 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:27:59] "GET /metrics HTTP/1.1" 200 48378 "" "Prometheus/2.51.0"
Dec  7 05:28:00 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:00 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:00 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:00.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:01 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:01 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:01 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:01.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:01 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1381: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:02 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:02 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:02 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:02.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  7 05:28:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/658643401' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  7 05:28:02 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  7 05:28:02 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/658643401' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  7 05:28:03 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:28:03 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:03 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:03 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:03.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:03 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1382: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:28:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:03 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:03 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:03 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:04 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:04 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:04 np0005549474 nova_compute[256753]: 2025-12-07 10:28:04.332 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:28:04 np0005549474 nova_compute[256753]: 2025-12-07 10:28:04.333 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:04 np0005549474 nova_compute[256753]: 2025-12-07 10:28:04.334 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:28:04 np0005549474 nova_compute[256753]: 2025-12-07 10:28:04.334 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:28:04 np0005549474 nova_compute[256753]: 2025-12-07 10:28:04.335 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:28:04 np0005549474 nova_compute[256753]: 2025-12-07 10:28:04.337 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:04 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:04 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:28:04 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:04.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:28:05 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:05 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:05 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:05.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:05 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1383: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:06 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:06 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:06 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:06.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:07 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:07.263Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:28:07 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:07 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:07 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:07.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:07 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1384: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:08 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:08 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:08 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:08.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:08 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:28:08 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:08.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:28:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:08 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:08 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:08 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:09 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:09 np0005549474 nova_compute[256753]: 2025-12-07 10:28:09.333 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:09 np0005549474 nova_compute[256753]: 2025-12-07 10:28:09.337 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:09 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:09 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:09 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:09.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:09 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1385: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:28:09 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:28:09] "GET /metrics HTTP/1.1" 200 48378 "" "Prometheus/2.51.0"
Dec  7 05:28:09 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:28:09] "GET /metrics HTTP/1.1" 200 48378 "" "Prometheus/2.51.0"
Dec  7 05:28:10 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:10 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:10 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:10.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:11 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:11 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:11 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:11.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:11 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1386: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:12 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:12 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:12 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:12.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:12 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:28:12 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:28:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:28:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:28:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:28:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:28:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:28:12 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:28:13 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:28:13 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:13 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:13 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:13.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:13 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1387: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:28:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:13 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:13 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:14 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:14 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:14 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:14 np0005549474 nova_compute[256753]: 2025-12-07 10:28:14.335 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:14 np0005549474 nova_compute[256753]: 2025-12-07 10:28:14.338 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:14 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:14 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:14 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:14.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:15 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:15 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:15 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:15.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:15 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1388: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:16 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:16 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:16 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:16.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:17 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:17.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:28:17 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:17 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:17 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:17.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:17 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1389: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:18 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:18 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:18 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:18.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.427184) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298427234, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1292, "num_deletes": 257, "total_data_size": 2366241, "memory_usage": 2414744, "flush_reason": "Manual Compaction"}
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298440088, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2295859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37047, "largest_seqno": 38338, "table_properties": {"data_size": 2289816, "index_size": 3306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12910, "raw_average_key_size": 19, "raw_value_size": 2277552, "raw_average_value_size": 3466, "num_data_blocks": 145, "num_entries": 657, "num_filter_entries": 657, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765103180, "oldest_key_time": 1765103180, "file_creation_time": 1765103298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 12983 microseconds, and 4815 cpu microseconds.
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.440157) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2295859 bytes OK
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.440187) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.445687) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.445784) EVENT_LOG_v1 {"time_micros": 1765103298445774, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.445815) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2360516, prev total WAL file size 2360516, number of live WAL files 2.
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.447095) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2242KB)], [80(12MB)]
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298447164, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15424600, "oldest_snapshot_seqno": -1}
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6964 keys, 15288886 bytes, temperature: kUnknown
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298570340, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 15288886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15243061, "index_size": 27281, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 183133, "raw_average_key_size": 26, "raw_value_size": 15118308, "raw_average_value_size": 2170, "num_data_blocks": 1073, "num_entries": 6964, "num_filter_entries": 6964, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100347, "oldest_key_time": 0, "file_creation_time": 1765103298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "db64e6a7-bc4f-4cbe-9d35-a6ad1c82a687", "db_session_id": "JT62X3AUQJPC1MNA6VWA", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.570686) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 15288886 bytes
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.573455) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.1 rd, 124.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 12.5 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(13.4) write-amplify(6.7) OK, records in: 7492, records dropped: 528 output_compression: NoCompression
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.573485) EVENT_LOG_v1 {"time_micros": 1765103298573471, "job": 46, "event": "compaction_finished", "compaction_time_micros": 123274, "compaction_time_cpu_micros": 61981, "output_level": 6, "num_output_files": 1, "total_output_size": 15288886, "num_input_records": 7492, "num_output_records": 6964, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298574512, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298579067, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.446902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.579260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.579271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.579274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.579277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:28:18 np0005549474 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/07-10:28:18.579280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 05:28:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:18.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:28:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:18.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  7 05:28:18 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:18.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:28:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:18 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:18 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:18 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:19 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:19 np0005549474 nova_compute[256753]: 2025-12-07 10:28:19.338 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:19 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:19 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:19 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:19.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:19 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1390: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:28:19 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:28:19] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:28:19 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:28:19] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec  7 05:28:20 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:20 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:20 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:20.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:21 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:21 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:21 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:21.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:21 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1391: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:22 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:22 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:22 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:23 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:28:23 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:23 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:23 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:23.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:23 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1392: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:28:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:23 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:23 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:23 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:24 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:24 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:24 np0005549474 podman[296660]: 2025-12-07 10:28:24.258090997 +0000 UTC m=+0.069229439 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  7 05:28:24 np0005549474 podman[296661]: 2025-12-07 10:28:24.292997209 +0000 UTC m=+0.097134569 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  7 05:28:24 np0005549474 nova_compute[256753]: 2025-12-07 10:28:24.338 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:24 np0005549474 nova_compute[256753]: 2025-12-07 10:28:24.340 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:24 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:24 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:24 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:24.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:25 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:25 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:25 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:25.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:25 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1393: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:26 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:26 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:26 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:26.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:27 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:27.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:28:27 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:28:27 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:28:27 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:27 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:27 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:27.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:27 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1394: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:28 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:28 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:28:28 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:28.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:28:28 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:28:28 np0005549474 nova_compute[256753]: 2025-12-07 10:28:28.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:28:28 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:28.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:28:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:28 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:28 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:29 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:29 np0005549474 nova_compute[256753]: 2025-12-07 10:28:29.341 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:28:29 np0005549474 nova_compute[256753]: 2025-12-07 10:28:29.343 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:28:29 np0005549474 nova_compute[256753]: 2025-12-07 10:28:29.343 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:28:29 np0005549474 nova_compute[256753]: 2025-12-07 10:28:29.343 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:28:29 np0005549474 nova_compute[256753]: 2025-12-07 10:28:29.361 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:29 np0005549474 nova_compute[256753]: 2025-12-07 10:28:29.361 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:28:29 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:29 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:29 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:29.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:29 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1395: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:28:29 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:28:29] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Dec  7 05:28:29 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:28:29] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Dec  7 05:28:30 np0005549474 podman[296711]: 2025-12-07 10:28:30.276492266 +0000 UTC m=+0.083646421 container health_status cd0d9666811879c68212870a37f9ed2870e47ec05111250c3850134f0ed16e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  7 05:28:30 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:30 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:30 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:30.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:31 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:31 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:31 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:31.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:31 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1396: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:32 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:32 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:28:32 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:32.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:28:32 np0005549474 nova_compute[256753]: 2025-12-07 10:28:32.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:28:32 np0005549474 nova_compute[256753]: 2025-12-07 10:28:32.753 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  7 05:28:33 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:28:33 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:33 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:33 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:33.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:33 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1397: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:28:33 np0005549474 systemd-logind[796]: New session 59 of user zuul.
Dec  7 05:28:33 np0005549474 systemd[1]: Started Session 59 of User zuul.
Dec  7 05:28:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:33 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:34 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:34 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:34 np0005549474 nova_compute[256753]: 2025-12-07 10:28:34.362 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:34 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:34 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:34 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:34.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:35 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:35 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:35 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:35.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:35 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1398: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:35 np0005549474 nova_compute[256753]: 2025-12-07 10:28:35.754 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:28:36 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27272 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:36 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18414 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:36 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:36 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:36 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:36.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:36 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27511 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:36 np0005549474 nova_compute[256753]: 2025-12-07 10:28:36.752 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:28:36 np0005549474 nova_compute[256753]: 2025-12-07 10:28:36.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:28:36 np0005549474 nova_compute[256753]: 2025-12-07 10:28:36.779 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:28:36 np0005549474 nova_compute[256753]: 2025-12-07 10:28:36.780 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:28:36 np0005549474 nova_compute[256753]: 2025-12-07 10:28:36.780 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:28:36 np0005549474 nova_compute[256753]: 2025-12-07 10:28:36.780 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  7 05:28:36 np0005549474 nova_compute[256753]: 2025-12-07 10:28:36.781 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:28:36 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27281 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:36 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18423 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:28:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1298980407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.199 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:28:37 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27523 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:37 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:37.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:28:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec  7 05:28:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2125063758' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.384 256757 WARNING nova.virt.libvirt.driver [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.385 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4415MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.385 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.386 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.440 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.442 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.461 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  7 05:28:37 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:37 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:28:37 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:37.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:28:37 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1399: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:37 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  7 05:28:37 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1033101545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.921 256757 DEBUG oslo_concurrency.processutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.929 256757 DEBUG nova.compute.provider_tree [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed in ProviderTree for provider: 7e48a19e-1e29-4c67-8ffa-7daf855825bb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.946 256757 DEBUG nova.scheduler.client.report [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Inventory has not changed for provider 7e48a19e-1e29-4c67-8ffa-7daf855825bb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.949 256757 DEBUG nova.compute.resource_tracker [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  7 05:28:37 np0005549474 nova_compute[256753]: 2025-12-07 10:28:37.950 256757 DEBUG oslo_concurrency.lockutils [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:28:38 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:38 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:38 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:38.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:38 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:28:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:28:38.640 164143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  7 05:28:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:28:38.640 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  7 05:28:38 np0005549474 ovn_metadata_agent[164137]: 2025-12-07 10:28:38.641 164143 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  7 05:28:38 np0005549474 nova_compute[256753]: 2025-12-07 10:28:38.947 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:28:38 np0005549474 nova_compute[256753]: 2025-12-07 10:28:38.947 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:28:38 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:38.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:28:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:38 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:38 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:38 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:39 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:39 np0005549474 nova_compute[256753]: 2025-12-07 10:28:39.363 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:28:39 np0005549474 nova_compute[256753]: 2025-12-07 10:28:39.364 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:39 np0005549474 nova_compute[256753]: 2025-12-07 10:28:39.364 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:28:39 np0005549474 nova_compute[256753]: 2025-12-07 10:28:39.365 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:28:39 np0005549474 nova_compute[256753]: 2025-12-07 10:28:39.366 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:28:39 np0005549474 nova_compute[256753]: 2025-12-07 10:28:39.368 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:39 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:39 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:39 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:39.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:39 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1400: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:28:39 np0005549474 nova_compute[256753]: 2025-12-07 10:28:39.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:28:39 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:28:39] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Dec  7 05:28:39 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:28:39] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Dec  7 05:28:40 np0005549474 ovs-vsctl[297133]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  7 05:28:40 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:40 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:40 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:40.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:40 np0005549474 nova_compute[256753]: 2025-12-07 10:28:40.753 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:28:40 np0005549474 nova_compute[256753]: 2025-12-07 10:28:40.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  7 05:28:40 np0005549474 nova_compute[256753]: 2025-12-07 10:28:40.754 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  7 05:28:40 np0005549474 nova_compute[256753]: 2025-12-07 10:28:40.767 256757 DEBUG nova.compute.manager [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  7 05:28:41 np0005549474 virtqemud[256299]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  7 05:28:41 np0005549474 virtqemud[256299]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  7 05:28:41 np0005549474 virtqemud[256299]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  7 05:28:41 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:41 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:41 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:41.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:41 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1401: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:41 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: cache status {prefix=cache status} (starting...)
Dec  7 05:28:41 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:41 np0005549474 lvm[297471]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 05:28:41 np0005549474 lvm[297471]: VG ceph_vg0 finished
Dec  7 05:28:42 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: client ls {prefix=client ls} (starting...)
Dec  7 05:28:42 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:42 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:42 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:42 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:42.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Optimize plan auto_2025-12-07_10:28:42
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] do_upmap
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', '.rgw.root', '.nfs']
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 05:28:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:28:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18459 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27314 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:42 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: damage ls {prefix=damage ls} (starting...)
Dec  7 05:28:42 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:42 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump loads {prefix=dump loads} (starting...)
Dec  7 05:28:42 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:42 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  7 05:28:42 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/928733236' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  7 05:28:42 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18477 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:42 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  7 05:28:42 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  7 05:28:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27335 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:43 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  7 05:28:43 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 05:28:43 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  7 05:28:43 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:28:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2189176137' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18489 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:43 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  7 05:28:43 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27353 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27535 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:43 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:43 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:43 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:43.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:43 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  7 05:28:43 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec  7 05:28:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/83704881' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1402: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18513 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:43 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  7 05:28:43 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27380 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:43 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  7 05:28:43 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  7 05:28:43 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27553 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:43 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:44 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:44 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:44 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:44 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:44 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: ops {prefix=ops} (starting...)
Dec  7 05:28:44 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec  7 05:28:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282347237' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  7 05:28:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec  7 05:28:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/298639853' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  7 05:28:44 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27568 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:44 np0005549474 nova_compute[256753]: 2025-12-07 10:28:44.366 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:44 np0005549474 nova_compute[256753]: 2025-12-07 10:28:44.368 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:44 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:44 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:44 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:44.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:44 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27410 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:44 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  7 05:28:44 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609356737' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  7 05:28:44 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27428 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:44 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: session ls {prefix=session ls} (starting...)
Dec  7 05:28:44 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk Can't run that command on an inactive MDS!
Dec  7 05:28:44 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27583 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:44 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18570 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:44 np0005549474 ceph-mds[97301]: mds.cephfs.compute-0.qgzqbk asok_command: status {prefix=status} (starting...)
Dec  7 05:28:45 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27437 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/497187814' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543051868' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2810979409' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  7 05:28:45 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:45 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27607 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:45 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:45 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:45.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:45 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1403: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/218383073' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1966025728' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  7 05:28:45 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27625 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  7 05:28:45 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345442847' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  7 05:28:46 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18636 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:46 np0005549474 ceph-mgr[74811]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  7 05:28:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T10:28:46.041+0000 7f2c9a7e3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  7 05:28:46 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27491 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:46 np0005549474 ceph-mgr[74811]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  7 05:28:46 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T10:28:46.297+0000 7f2c9a7e3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  7 05:28:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  7 05:28:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2709281894' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  7 05:28:46 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:46 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:46 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:46.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  7 05:28:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  7 05:28:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec  7 05:28:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269760533' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  7 05:28:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  7 05:28:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/206540178' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  7 05:28:46 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec  7 05:28:46 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833726415' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  7 05:28:47 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18690 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:47.267Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:28:47 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27670 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:47 np0005549474 ceph-mgr[74811]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  7 05:28:47 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: 2025-12-07T10:28:47.338+0000 7f2c9a7e3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  7 05:28:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  7 05:28:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390567391' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  7 05:28:47 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27527 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:47 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:47 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:47 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:47.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:47 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18717 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:47 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1404: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:47 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  7 05:28:47 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/396672336' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  7 05:28:47 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27554 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:47 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18744 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 1622016 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930286 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 1622016 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 1605632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 1605632 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.163699150s of 11.219295502s, submitted: 11
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 1564672 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7fa2000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930306 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 1556480 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 53.619079590s of 53.630729675s, submitted: 1
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 1540096 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930454 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931966 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931966 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 1515520 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.790068626s of 13.823327065s, submitted: 10
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931666 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e8370b40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 1499136 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931818 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.882770538s of 37.894630432s, submitted: 1
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931950 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 1490944 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933478 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 1474560 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.014238358s of 12.047296524s, submitted: 10
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932719 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932739 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932739 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932739 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554e800 session 0x55c4e7f2ab40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 1466368 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8d400 session 0x55c4e619cf00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932739 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932739 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.964462280s of 27.972938538s, submitted: 2
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933003 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934515 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 1458176 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.966125488s of 11.298370361s, submitted: 13
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935436 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 1449984 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935172 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 1441792 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935172 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554f000 session 0x55c4e6726780
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935172 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7c9f680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935172 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935172 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 1433600 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.646772385s of 26.672891617s, submitted: 4
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 1425408 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935284 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 1417216 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1409024 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934861 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.061220169s of 12.097341537s, submitted: 11
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934861 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8d400 session 0x55c4e73e85a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.280908585s of 28.311281204s, submitted: 8
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 1392640 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934729 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 1392640 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 1392640 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934729 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 1368064 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934729 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.471632004s of 16.617654800s, submitted: 9
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e73ba000 session 0x55c4e71983c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread fragmentation_score=0.000029 took=0.000052s
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934581 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 7653 writes, 30K keys, 7653 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7653 writes, 1575 syncs, 4.86 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 765 writes, 1338 keys, 765 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s#012Interval WAL: 765 writes, 379 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c4e39cf350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1351680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1335296 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.645007133s of 15.648278236s, submitted: 1
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934713 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936241 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 229376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936241 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 212992 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.669661522s of 12.726916313s, submitted: 10
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935941 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 180224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e558b000 session 0x55c4e71981e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936093 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936093 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.996849060s of 18.999956131s, submitted: 1
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936225 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 147456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 147456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 147456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936241 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 270336 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 253952 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936994 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.219687462s of 13.265237808s, submitted: 12
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e54bd680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937014 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937014 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937014 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937014 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.302152634s of 18.305004120s, submitted: 1
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 237568 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937162 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 212992 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 196608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 188416 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 172032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938542 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 172032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.008382797s of 10.192886353s, submitted: 11
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554e800 session 0x55c4e73efc20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937951 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937951 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.604965210s of 13.608486176s, submitted: 1
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938067 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 163840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 147456 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e558b000 session 0x55c4e6229e00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938083 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938083 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.548464775s of 14.559932709s, submitted: 4
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938083 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938083 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941091 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 73728 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e794ef00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 73728 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 73728 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.014322281s of 15.062206268s, submitted: 15
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940959 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 57344 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 57344 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940959 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941091 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 40960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.579513550s of 12.591442108s, submitted: 3
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 16384 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 16384 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 16384 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941107 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 8192 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 8192 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 0 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940348 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939909 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.942881584s of 13.975779533s, submitted: 10
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e6e8d400 session 0x55c4e7466000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 1048576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939777 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939777 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1040384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.855612755s of 10.858253479s, submitted: 1
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 983040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fca64000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [1])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 770048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 770048 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939925 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 761856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 761856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 761856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 761856 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939925 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e73ba000 session 0x55c4e8336d20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 753664 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939925 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.921133041s of 17.504179001s, submitted: 234
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939625 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 745472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 737280 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 737280 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939909 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 737280 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 737280 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942949 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.817281723s of 12.883481026s, submitted: 8
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 720896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 720896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 720896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942949 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942649 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942801 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942801 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e83854a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942801 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942801 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 696320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.492683411s of 31.511125565s, submitted: 6
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942933 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 712704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 704512 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942949 data_alloc: 218103808 data_used: 98304
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942342 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.010691643s of 12.045362473s, submitted: 10
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 688128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941619 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 ms_handle_reset con 0x55c4e554f000 session 0x55c4e82e70e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc654000/0x0/0x4ffc00000, data 0x1056a5/0x1b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.260337830s of 119.267936707s, submitted: 2
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 729088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945385 data_alloc: 218103808 data_used: 102400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 720896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 151 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e79b2b40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 18440192 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 18309120 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb568000/0x0/0x4ffc00000, data 0x11eba2f/0x12a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 152 ms_handle_reset con 0x55c4e5d9d400 session 0x55c4e531c5a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fb564000/0x0/0x4ffc00000, data 0x11edb37/0x12a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073756 data_alloc: 218103808 data_used: 110592
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fb564000/0x0/0x4ffc00000, data 0x11edb37/0x12a7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e558b000 session 0x55c4e7ca0d20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 18292736 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076318 data_alloc: 218103808 data_used: 110592
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb561000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 18284544 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 18284544 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.473200798s of 12.728899956s, submitted: 91
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075494 data_alloc: 218103808 data_used: 106496
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 18276352 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075626 data_alloc: 218103808 data_used: 106496
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 18268160 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 18268160 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 18268160 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.053786278s of 11.139539719s, submitted: 6
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 18251776 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 18251776 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075494 data_alloc: 218103808 data_used: 106496
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 18251776 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 18235392 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 18235392 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 18235392 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 18227200 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076990 data_alloc: 218103808 data_used: 110592
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 18227200 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 18227200 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 18219008 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 18219008 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 18219008 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076858 data_alloc: 218103808 data_used: 110592
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 18219008 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 18219008 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7199a40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e73ba000 session 0x55c4e7475a40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e6f07800 session 0x55c4e531cf00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 18210816 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.845787048s of 14.884376526s, submitted: 11
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e73ba800 session 0x55c4e54bd680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e558b000 session 0x55c4e794f2c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97067008 unmapped: 6709248 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97067008 unmapped: 6709248 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e89d05a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb562000/0x0/0x4ffc00000, data 0x11efb09/0x12aa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107106 data_alloc: 234881024 data_used: 11579392
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97067008 unmapped: 6709248 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97083392 unmapped: 6692864 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 155 ms_handle_reset con 0x55c4e6f07800 session 0x55c4e89d0960
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 155 ms_handle_reset con 0x55c4e73ba000 session 0x55c4e89d0d20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 155 ms_handle_reset con 0x55c4e73bf800 session 0x55c4e89d0f00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 155 ms_handle_reset con 0x55c4e558b000 session 0x55c4e89d12c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 155 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e89d1680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 6594560 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb553000/0x0/0x4ffc00000, data 0x11f9d45/0x12b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 6594560 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 6561792 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118732 data_alloc: 234881024 data_used: 11579392
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 6561792 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb553000/0x0/0x4ffc00000, data 0x11f9d45/0x12b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 6561792 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb553000/0x0/0x4ffc00000, data 0x11f9d45/0x12b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 6561792 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 6545408 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.737378120s of 10.802026749s, submitted: 19
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97255424 unmapped: 6520832 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120694 data_alloc: 234881024 data_used: 11603968
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb551000/0x0/0x4ffc00000, data 0x11fbd17/0x12ba000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb551000/0x0/0x4ffc00000, data 0x11fbd17/0x12ba000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120694 data_alloc: 234881024 data_used: 11603968
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb551000/0x0/0x4ffc00000, data 0x11fbd17/0x12ba000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb551000/0x0/0x4ffc00000, data 0x11fbd17/0x12ba000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 6438912 heap: 103776256 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.648746490s of 11.656969070s, submitted: 15
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120862 data_alloc: 234881024 data_used: 11599872
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 1916928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 2146304 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2301952 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d98000/0x0/0x4ffc00000, data 0x180dd17/0x18cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2301952 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2301952 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168960 data_alloc: 234881024 data_used: 11628544
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2301952 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d98000/0x0/0x4ffc00000, data 0x180dd17/0x18cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2301952 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9d000/0x0/0x4ffc00000, data 0x1810d17/0x18cf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9d000/0x0/0x4ffc00000, data 0x1810d17/0x18cf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165024 data_alloc: 234881024 data_used: 11628544
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9d000/0x0/0x4ffc00000, data 0x1810d17/0x18cf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.746442795s of 12.949378967s, submitted: 59
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9c000/0x0/0x4ffc00000, data 0x1811d17/0x18d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9c000/0x0/0x4ffc00000, data 0x1811d17/0x18d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165248 data_alloc: 234881024 data_used: 11628544
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9c000/0x0/0x4ffc00000, data 0x1811d17/0x18d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c25800 session 0x55c4e7b803c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c80c00 session 0x55c4e7c9d0e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8dc00 session 0x55c4e619c000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e74661e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7f2ab40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 2572288 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e6246000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c25800 session 0x55c4e61f2b40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 1220608 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c80c00 session 0x55c4e619cf00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e83843c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e531c780
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e75ee1e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c25800 session 0x55c4e8444f00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183551 data_alloc: 234881024 data_used: 12087296
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9c22000/0x0/0x4ffc00000, data 0x198bd17/0x1a4a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c15400 session 0x55c4e7f2a5a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7628b40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183551 data_alloc: 234881024 data_used: 12087296
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 3596288 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e7c9cf00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.767366409s of 12.866190910s, submitted: 26
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7951860
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfd000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104333312 unmapped: 3637248 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103784448 unmapped: 4186112 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 3981312 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 3923968 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfd000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188841 data_alloc: 234881024 data_used: 12091392
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 3923968 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfd000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 3923968 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 3923968 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfd000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [1,0,0,1])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 3923968 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfc000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 3915776 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190673 data_alloc: 234881024 data_used: 12087296
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfc000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 3915776 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 3891200 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 3891200 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9bfd000/0x0/0x4ffc00000, data 0x19afd27/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 3891200 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.556396484s of 12.604061127s, submitted: 14
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 3891200 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204693 data_alloc: 234881024 data_used: 12136448
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105209856 unmapped: 3809280 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9beb000/0x0/0x4ffc00000, data 0x19c1d27/0x1a81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 3637248 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9a86000/0x0/0x4ffc00000, data 0x1b1dd27/0x1bdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3620864 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3620864 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9a12000/0x0/0x4ffc00000, data 0x1b99d27/0x1c59000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105414656 unmapped: 3604480 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213064 data_alloc: 234881024 data_used: 12189696
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105414656 unmapped: 3604480 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105414656 unmapped: 3604480 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 3375104 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 3375104 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x1bbad27/0x1c7a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 3375104 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212024 data_alloc: 234881024 data_used: 12189696
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 3375104 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.053034782s of 12.220693588s, submitted: 45
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c25800 session 0x55c4e8454f00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c2d000 session 0x55c4e6246d20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e73f0000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x1bbad27/0x1c7a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9b000/0x0/0x4ffc00000, data 0x1812d17/0x18d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172459 data_alloc: 234881024 data_used: 11890688
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9b000/0x0/0x4ffc00000, data 0x1812d17/0x18d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 3481600 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9d9b000/0x0/0x4ffc00000, data 0x1812d17/0x18d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07800 session 0x55c4e89d1a40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e7947680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127305 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127305 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127305 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127305 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127305 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 3694592 heap: 109019136 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.848108292s of 30.943441391s, submitted: 31
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e89ca5a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa033000/0x0/0x4ffc00000, data 0x157bd07/0x1639000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554f000 session 0x55c4e84452c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554f000 session 0x55c4e7472780
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155793 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e82e7a40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e79b23c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7c9c000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa033000/0x0/0x4ffc00000, data 0x157bd07/0x1639000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 105332736 unmapped: 5791744 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa032000/0x0/0x4ffc00000, data 0x157bd17/0x163a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182383 data_alloc: 234881024 data_used: 15511552
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa032000/0x0/0x4ffc00000, data 0x157bd17/0x163a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2498560 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.281540871s of 12.335005760s, submitted: 7
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 2777088 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182011 data_alloc: 234881024 data_used: 15511552
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 2777088 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 2777088 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108355584 unmapped: 2768896 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108355584 unmapped: 2768896 heap: 111124480 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f976d000/0x0/0x4ffc00000, data 0x1e40d17/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 5152768 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248907 data_alloc: 234881024 data_used: 15556608
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9733000/0x0/0x4ffc00000, data 0x1e74d17/0x1f33000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 3776512 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 3776512 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 3776512 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f972d000/0x0/0x4ffc00000, data 0x1e80d17/0x1f3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243260 data_alloc: 234881024 data_used: 15556608
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f972d000/0x0/0x4ffc00000, data 0x1e80d17/0x1f3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.510517120s of 15.721278191s, submitted: 62
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243280 data_alloc: 234881024 data_used: 15560704
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f972d000/0x0/0x4ffc00000, data 0x1e80d17/0x1f3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f972d000/0x0/0x4ffc00000, data 0x1e80d17/0x1f3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243280 data_alloc: 234881024 data_used: 15560704
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554e800 session 0x55c4e71992c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f972d000/0x0/0x4ffc00000, data 0x1e80d17/0x1f3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243280 data_alloc: 234881024 data_used: 15560704
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 4734976 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c25800 session 0x55c4e7ba1e00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07800 session 0x55c4e7c510e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.520026207s of 12.524559021s, submitted: 1
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e54bcf00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132603 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2405987548' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132735 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6668288 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.115133286s of 10.144872665s, submitted: 10
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 6660096 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 6660096 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 6660096 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134263 data_alloc: 234881024 data_used: 11837440
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134263 data_alloc: 234881024 data_used: 11837440
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.573747635s of 11.605253220s, submitted: 9
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133963 data_alloc: 234881024 data_used: 11837440
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 6651904 heap: 114982912 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 9994240 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e558b000 session 0x55c4e7946b40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 13139968 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 13139968 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172001 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x166fd07/0x172d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 13139968 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 13139968 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 13139968 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e83852c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f3f000/0x0/0x4ffc00000, data 0x166fd07/0x172d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 13369344 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 13369344 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196357 data_alloc: 234881024 data_used: 15122432
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 12320768 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f1b000/0x0/0x4ffc00000, data 0x1693d07/0x1751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f1b000/0x0/0x4ffc00000, data 0x1693d07/0x1751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196357 data_alloc: 234881024 data_used: 15122432
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f1b000/0x0/0x4ffc00000, data 0x1693d07/0x1751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f1b000/0x0/0x4ffc00000, data 0x1693d07/0x1751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 12296192 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.439218521s of 21.482173920s, submitted: 13
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237173 data_alloc: 234881024 data_used: 15175680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 10264576 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9937000/0x0/0x4ffc00000, data 0x1c6ed07/0x1d2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x1c6fd07/0x1d2d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07000 session 0x55c4e7c512c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252145 data_alloc: 234881024 data_used: 15360000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x1c6fd07/0x1d2d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 10330112 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247489 data_alloc: 234881024 data_used: 15360000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f991e000/0x0/0x4ffc00000, data 0x1c90d07/0x1d4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f991e000/0x0/0x4ffc00000, data 0x1c90d07/0x1d4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 10731520 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.988185883s of 13.187009811s, submitted: 63
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 10649600 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 10649600 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247593 data_alloc: 234881024 data_used: 15360000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 10649600 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 10649600 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9915000/0x0/0x4ffc00000, data 0x1c99d07/0x1d57000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 10649600 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e84445a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110092288 unmapped: 11190272 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110092288 unmapped: 11190272 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272861 data_alloc: 234881024 data_used: 15360000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110092288 unmapped: 11190272 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f966b000/0x0/0x4ffc00000, data 0x1f43d07/0x2001000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 11182080 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 11182080 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1111]
  ** DB Stats **
  Uptime(secs): 1800.1 total, 600.0 interval
  Cumulative writes: 9158 writes, 34K keys, 9158 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
  Cumulative WAL: 9158 writes, 2255 syncs, 4.06 writes per sync, written: 0.02 GB, 0.01 MB/s
  Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
  Interval writes: 1505 writes, 4088 keys, 1505 commit groups, 1.0 writes per commit group, ingest: 3.55 MB, 0.01 MB/s
  Interval WAL: 1505 writes, 680 syncs, 2.21 writes per sync, written: 0.00 GB, 0.01 MB/s
  Interval stall: 00:00:0.000 H:M:S, 0.0 percent
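
The WAL counters in the dump are internally consistent: writes per sync is just writes divided by syncs. Verifying against the logged figures:

    cum_writes, cum_syncs = 9158, 2255
    int_writes, int_syncs = 1505, 680
    print(f"cumulative: {cum_writes / cum_syncs:.2f} writes/sync")  # 4.06, as logged
    print(f"interval:   {int_writes / int_syncs:.2f} writes/sync")  # 2.21, as logged
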
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 11182080 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.118002892s of 11.166302681s, submitted: 14
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c12800 session 0x55c4e5c7cb40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 109666304 unmapped: 11616256 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f966b000/0x0/0x4ffc00000, data 0x1f43d07/0x2001000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274889 data_alloc: 234881024 data_used: 15360000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 109666304 unmapped: 11616256 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 109551616 unmapped: 11730944 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110747648 unmapped: 10534912 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9647000/0x0/0x4ffc00000, data 0x1f67d07/0x2025000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110747648 unmapped: 10534912 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110755840 unmapped: 10526720 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292521 data_alloc: 234881024 data_used: 17858560
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 10379264 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9647000/0x0/0x4ffc00000, data 0x1f67d07/0x2025000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 10362880 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 10362880 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 10371072 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 10371072 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292725 data_alloc: 234881024 data_used: 17858560
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 10371072 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 110911488 unmapped: 10371072 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.492300034s of 12.506669044s, submitted: 4
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9647000/0x0/0x4ffc00000, data 0x1f67d07/0x2025000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 4939776 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9144000/0x0/0x4ffc00000, data 0x246ad07/0x2528000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 4243456 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9107000/0x0/0x4ffc00000, data 0x24a7d07/0x2565000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 4210688 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342291 data_alloc: 234881024 data_used: 18628608
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 4079616 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 4079616 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 4079616 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9107000/0x0/0x4ffc00000, data 0x24a7d07/0x2565000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 4046848 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115367936 unmapped: 5914624 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340323 data_alloc: 234881024 data_used: 18628608
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 5775360 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c2c400 session 0x55c4e7b9eb40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c28000 session 0x55c4e67243c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 5775360 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.908575058s of 10.101375580s, submitted: 82
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c28000 session 0x55c4e73e85a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 7979008 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 7979008 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9912000/0x0/0x4ffc00000, data 0x1c9cd07/0x1d5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 7979008 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9912000/0x0/0x4ffc00000, data 0x1c9cd07/0x1d5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253951 data_alloc: 234881024 data_used: 15360000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 7979008 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7944960
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e73ee5a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 7979008 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e7941860
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147625 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27572 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147625 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147625 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147625 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147625 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 9601024 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07000 session 0x55c4e7b80960
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07000 session 0x55c4e7ca3c20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7d88780
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e6724780
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.788125992s of 30.815643311s, submitted: 11
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 10158080 heap: 121282560 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e6256780
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c28000 session 0x55c4e82e6000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7d89c20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7624f00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e73ef680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 13991936 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9ff6000/0x0/0x4ffc00000, data 0x15b6d79/0x1676000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 13991936 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188942 data_alloc: 234881024 data_used: 11841536
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 13991936 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07000 session 0x55c4e83361e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 13991936 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c12800 session 0x55c4e794e000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 13991936 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c12800 session 0x55c4e73f0d20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7629c20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111525888 unmapped: 14041088 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 13975552 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15b6dac/0x1678000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197713 data_alloc: 234881024 data_used: 12099584
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 13975552 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15b6dac/0x1678000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208505 data_alloc: 234881024 data_used: 13713408
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 13893632 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.291709900s of 17.377859116s, submitted: 43
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282921 data_alloc: 234881024 data_used: 14348288
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f96f7000/0x0/0x4ffc00000, data 0x1eaddac/0x1f6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115269632 unmapped: 10297344 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 8699904 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 8699904 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 8699904 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f965b000/0x0/0x4ffc00000, data 0x1f40dac/0x2002000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f965b000/0x0/0x4ffc00000, data 0x1f40dac/0x2002000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 8691712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302963 data_alloc: 234881024 data_used: 14471168
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 8691712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 8691712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 9060352 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 9060352 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9649000/0x0/0x4ffc00000, data 0x1f61dac/0x2023000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 9060352 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294123 data_alloc: 234881024 data_used: 14483456
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 9052160 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 9052160 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.663401604s of 11.935150146s, submitted: 126
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 9052160 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9642000/0x0/0x4ffc00000, data 0x1f68dac/0x202a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 9052160 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 9625600 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294379 data_alloc: 234881024 data_used: 14483456
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 9625600 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9642000/0x0/0x4ffc00000, data 0x1f68dac/0x202a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 9625600 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 9617408 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 9617408 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 9617408 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f963f000/0x0/0x4ffc00000, data 0x1f6bdac/0x202d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f963f000/0x0/0x4ffc00000, data 0x1f6bdac/0x202d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295531 data_alloc: 234881024 data_used: 14512128
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 9617408 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115949568 unmapped: 9617408 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 9609216 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.539818764s of 11.554802895s, submitted: 4
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9631000/0x0/0x4ffc00000, data 0x1f79dac/0x203b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9631000/0x0/0x4ffc00000, data 0x1f79dac/0x203b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296099 data_alloc: 234881024 data_used: 14512128
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118177792 unmapped: 7389184 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c2bc00 session 0x55c4e7b9f2c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c21400 session 0x55c4e62474a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c17c00 session 0x55c4e7940780
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7b9f4a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c12800 session 0x55c4e74754a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9631000/0x0/0x4ffc00000, data 0x1f79dac/0x203b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 9494528 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323210 data_alloc: 234881024 data_used: 14512128
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f92b1000/0x0/0x4ffc00000, data 0x22f9dac/0x23bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f92b1000/0x0/0x4ffc00000, data 0x22f9dac/0x23bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324050 data_alloc: 234881024 data_used: 14512128
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 9486336 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.159543991s of 13.274734497s, submitted: 21
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 9478144 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 9437184 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 7208960 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 7208960 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348510 data_alloc: 234881024 data_used: 18182144
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 7159808 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f92ae000/0x0/0x4ffc00000, data 0x22fcdac/0x23be000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 7159808 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 7159808 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 7159808 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f92ae000/0x0/0x4ffc00000, data 0x22fcdac/0x23be000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 7159808 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349270 data_alloc: 234881024 data_used: 18247680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118415360 unmapped: 7151616 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7102464 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7102464 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.494841576s of 11.516628265s, submitted: 5
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120193024 unmapped: 5373952 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8f7a000/0x0/0x4ffc00000, data 0x2630dac/0x26f2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120193024 unmapped: 5373952 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380476 data_alloc: 234881024 data_used: 18259968
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 5316608 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 5316608 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 5267456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 5267456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8f6b000/0x0/0x4ffc00000, data 0x263fdac/0x2701000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 5251072 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379860 data_alloc: 234881024 data_used: 18259968
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 5251072 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8f6b000/0x0/0x4ffc00000, data 0x263fdac/0x2701000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 5218304 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 6709248 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 6709248 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 6709248 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379964 data_alloc: 234881024 data_used: 18259968
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 6709248 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 6709248 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8f66000/0x0/0x4ffc00000, data 0x2644dac/0x2706000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.125692368s of 14.222403526s, submitted: 26
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c21400 session 0x55c4e7940000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118210560 unmapped: 7356416 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c2bc00 session 0x55c4e84443c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554f000 session 0x55c4e7c9f4a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307424 data_alloc: 234881024 data_used: 14561280
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1f8ddac/0x204f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1f8ddac/0x204f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1f8ddac/0x204f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307424 data_alloc: 234881024 data_used: 14561280
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118251520 unmapped: 7315456 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e73e8d20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e73ec1e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e75ee960
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b7000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.017325401s of 12.180756569s, submitted: 72
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166819 data_alloc: 234881024 data_used: 10006528
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554f800 session 0x55c4e4fa34a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554ec00 session 0x55c4e4fa3680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e608c000 session 0x55c4e5c7c3c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 10035200 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166835 data_alloc: 234881024 data_used: 10002432
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 10059776 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e6247a40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 10059776 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 10059776 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 10059776 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 10059776 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.982777596s of 10.994009018s, submitted: 4
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166667 data_alloc: 234881024 data_used: 10002432
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 9961472 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [1])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166687 data_alloc: 234881024 data_used: 10006528
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa3b9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 9715712 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e7198f00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c21000 session 0x55c4e73e8b40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e74741e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.336001396s of 10.008099556s, submitted: 246
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e73e9e00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e73ef0e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187036 data_alloc: 234881024 data_used: 10006528
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 10878976 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1d0000/0x0/0x4ffc00000, data 0x13ded07/0x149c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1d0000/0x0/0x4ffc00000, data 0x13ded07/0x149c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 10878976 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 10878976 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1d0000/0x0/0x4ffc00000, data 0x13ded07/0x149c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e67272c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 10878976 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f07400 session 0x55c4e61f25a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e75ee000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e75efa40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 10731520 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1d0000/0x0/0x4ffc00000, data 0x13ded07/0x149c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191498 data_alloc: 234881024 data_used: 10010624
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 10723328 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x1402d17/0x14c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x1402d17/0x14c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x1402d17/0x14c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197426 data_alloc: 234881024 data_used: 10801152
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.307585716s of 11.336967468s, submitted: 8
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e73e8f00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e73ee5a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 10764288 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e73bb400 session 0x55c4e73f0b40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f84000/0x0/0x4ffc00000, data 0x1219d17/0x12d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9f84000/0x0/0x4ffc00000, data 0x1219d17/0x12d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171331 data_alloc: 234881024 data_used: 10006528
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171331 data_alloc: 234881024 data_used: 10006528
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171331 data_alloc: 234881024 data_used: 10006528
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 11780096 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7629680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e76294a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e761e5a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 11771904 heap: 125566976 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e7bd4000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.225736618s of 16.294603348s, submitted: 21
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671fc00 session 0x55c4e7bd4d20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e74663c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e61f30e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e73f12c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6f06000 session 0x55c4e89ca960
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f993c000/0x0/0x4ffc00000, data 0x1861d17/0x1920000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224175 data_alloc: 234881024 data_used: 10006528
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f993c000/0x0/0x4ffc00000, data 0x1861d17/0x1920000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 14622720 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e4f800 session 0x55c4e7625680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671c400 session 0x55c4e83374a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 14319616 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226187 data_alloc: 234881024 data_used: 10010624
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 14295040 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9918000/0x0/0x4ffc00000, data 0x1885d17/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 11812864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 11812864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 11804672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7b7de00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e7ca8f00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 11804672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.655078888s of 12.087609291s, submitted: 13
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7940780
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 14639104 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 14639104 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 14639104 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 14639104 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 14639104 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178121 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178121 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.407890320s of 18.439193726s, submitted: 12
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 15155200 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 15147008 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 15147008 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 15147008 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 15147008 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 15147008 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: mgrc ms_handle_reset ms_handle_reset con 0x55c4e51e6000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2113101694
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2113101694,v1:192.168.122.100:6801/2113101694]
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: mgrc handle_mgr_configure stats_period=5
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 15106048 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 15106048 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 15106048 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e8455680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e7940b40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671c400 session 0x55c4e7d88000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e61f2f00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c81c00 session 0x55c4e76252c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 15122432 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 15122432 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 15122432 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e79423c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e6727c20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671c400 session 0x55c4e61f3860
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e761f680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 15114240 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 15106048 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177989 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.496601105s of 37.500679016s, submitted: 1
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 14606336 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 13770752 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 13729792 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f98bb000/0x0/0x4ffc00000, data 0x18e3d07/0x19a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 13860864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 13860864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231833 data_alloc: 234881024 data_used: 9707520
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 13860864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 13860864 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f98b3000/0x0/0x4ffc00000, data 0x18ebd07/0x19a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231833 data_alloc: 234881024 data_used: 9707520
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f98b3000/0x0/0x4ffc00000, data 0x18ebd07/0x19a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e608cc00 session 0x55c4e7c972c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231833 data_alloc: 234881024 data_used: 9707520
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.041767120s of 15.136543274s, submitted: 32
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e75ef4a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180925 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180925 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180925 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 13852672 heap: 128720896 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180925 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.009258270s of 20.026557922s, submitted: 6
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e619cf00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9879000/0x0/0x4ffc00000, data 0x1925d07/0x19e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235495 data_alloc: 234881024 data_used: 9547776
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 19349504 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9879000/0x0/0x4ffc00000, data 0x1925d07/0x19e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 17227776 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 16539648 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283375 data_alloc: 234881024 data_used: 14761984
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 16539648 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 16539648 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9879000/0x0/0x4ffc00000, data 0x1925d07/0x19e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119545856 unmapped: 16523264 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119545856 unmapped: 16523264 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119545856 unmapped: 16523264 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283375 data_alloc: 234881024 data_used: 14761984
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 16883712 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7940d20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c0e800 session 0x55c4e7941680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e73bac00 session 0x55c4e79403c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 16859136 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e7941e00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.217409134s of 16.274227142s, submitted: 9
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e5589800 session 0x55c4e7b9f680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e6e8c800 session 0x55c4e7b7de00
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c0e800 session 0x55c4e7473680
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c23400 session 0x55c4e74725a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c23400 session 0x55c4e89ca960
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9879000/0x0/0x4ffc00000, data 0x1925d07/0x19e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 17752064 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 17752064 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f93e2000/0x0/0x4ffc00000, data 0x1dbbd17/0x1e7a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 16687104 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370949 data_alloc: 234881024 data_used: 14852096
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 15843328 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 15843328 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 15712256 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2458d17/0x2517000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 15712256 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 12427264 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402519 data_alloc: 234881024 data_used: 18534400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 123674624 unmapped: 12394496 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8d24000/0x0/0x4ffc00000, data 0x2479d17/0x2538000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398031 data_alloc: 234881024 data_used: 18534400
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.642805099s of 14.886832237s, submitted: 69
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 13631488 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8d21000/0x0/0x4ffc00000, data 0x247cd17/0x253b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411793 data_alloc: 234881024 data_used: 18571264
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 13615104 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122634240 unmapped: 13434880 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 13393920 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 13393920 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8a6c000/0x0/0x4ffc00000, data 0x2731d17/0x27f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 13377536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422455 data_alloc: 234881024 data_used: 18698240
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 13377536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 13377536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 13377536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8a69000/0x0/0x4ffc00000, data 0x2734d17/0x27f3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422455 data_alloc: 234881024 data_used: 18698240
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 13344768 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8a69000/0x0/0x4ffc00000, data 0x2734d17/0x27f3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e89ca5a0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.802055359s of 17.920734406s, submitted: 34
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 122773504 unmapped: 13295616 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8a63000/0x0/0x4ffc00000, data 0x273ad17/0x27f9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e73bf000 session 0x55c4e7625c20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341159 data_alloc: 234881024 data_used: 14848000
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120643584 unmapped: 15425536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120643584 unmapped: 15425536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f91ab000/0x0/0x4ffc00000, data 0x1fefd07/0x20ad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120643584 unmapped: 15425536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671c400 session 0x55c4e75eed20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 120643584 unmapped: 15425536 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e95e9400 session 0x55c4e73e90e0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 20553728 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194248 data_alloc: 218103808 data_used: 7647232
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 20553728 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 20553728 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194248 data_alloc: 218103808 data_used: 7647232
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194248 data_alloc: 218103808 data_used: 7647232
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194248 data_alloc: 218103808 data_used: 7647232
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194248 data_alloc: 218103808 data_used: 7647232
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 20537344 heap: 136069120 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.277734756s of 26.333208084s, submitted: 21
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e47dfc00 session 0x55c4e75eeb40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270812 data_alloc: 218103808 data_used: 7647232
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e671c400 session 0x55c4e7b9f2c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 27779072 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347057 data_alloc: 234881024 data_used: 18173952
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347057 data_alloc: 234881024 data_used: 18173952
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1c94d07/0x1d52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 24166400 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.736631393s of 17.821859360s, submitted: 18
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124968960 unmapped: 18448384 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 18325504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416075 data_alloc: 234881024 data_used: 18780160
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124108800 unmapped: 19308544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x25a6d07/0x2664000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x25a6d07/0x2664000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x25a6d07/0x2664000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x25a6d07/0x2664000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417899 data_alloc: 234881024 data_used: 19128320
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 19300352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x25a6d07/0x2664000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417899 data_alloc: 234881024 data_used: 19128320
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 19292160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124133376 unmapped: 19283968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 124133376 unmapped: 19283968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c23400 session 0x55c4e7943c20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.364301682s of 16.526765823s, submitted: 69
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e73bf000 session 0x55c4e7ca8780
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417767 data_alloc: 234881024 data_used: 19128320
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c2a000 session 0x55c4e73e9a40
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18762 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26361856 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26353664 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26345472 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26337280 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26329088 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26329088 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26329088 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26320896 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26312704 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26312704 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26312704 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'config diff' '{prefix=config diff}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'config show' '{prefix=config show}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 26034176 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'counter dump' '{prefix=counter dump}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'counter schema' '{prefix=counter schema}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 26664960 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26566656 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'log dump' '{prefix=log dump}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'perf dump' '{prefix=perf dump}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26566656 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'perf schema' '{prefix=perf schema}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 27156480 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 27148288 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 27148288 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 27148288 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27709 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 27140096 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 27140096 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 27140096 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 27140096 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 27140096 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 27131904 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 27131904 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 27131904 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 27131904 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 27131904 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 27131904 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 27123712 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 27123712 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 27123712 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 27115520 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 27115520 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 27115520 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 27115520 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 27115520 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 27107328 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 27107328 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 27107328 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 27107328 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 27107328 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 27099136 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 27099136 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 27090944 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 27090944 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 27090944 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 27090944 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 27090944 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 27090944 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 27082752 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 27082752 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 27082752 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 27074560 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 27074560 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 27074560 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 27066368 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 27066368 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 27066368 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 27066368 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 27058176 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 27058176 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2972 syncs, 3.65 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1698 writes, 6024 keys, 1698 commit groups, 1.0 writes per commit group, ingest: 7.23 MB, 0.01 MB/s
Interval WAL: 1698 writes, 717 syncs, 2.37 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 27049984 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 27049984 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 27049984 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 27049984 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 27049984 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 27049984 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 27041792 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 27041792 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 27041792 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 27033600 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 27033600 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 27033600 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 27033600 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 27033600 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 27033600 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 27025408 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 27025408 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 27025408 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 27025408 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 27017216 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 27017216 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 27017216 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 27017216 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 27009024 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 27009024 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 27009024 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 27009024 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 27009024 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 27000832 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 27000832 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 27000832 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 27000832 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 26992640 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 26992640 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 26984448 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 26984448 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 26984448 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 26984448 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 26984448 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 26984448 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 26976256 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 26976256 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 26976256 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 26976256 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 26976256 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 26976256 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 26968064 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 26968064 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 26968064 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 26968064 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 26968064 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 26968064 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 26968064 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 26968064 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 26959872 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 26959872 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 26959872 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 26959872 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116457472 unmapped: 26959872 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 26951680 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 26951680 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 26951680 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 26951680 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 26943488 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 26943488 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 26943488 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 26935296 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 26935296 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 26935296 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 26927104 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 26927104 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 26927104 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 26943488 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 26943488 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 26943488 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 26935296 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 26935296 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 349.484527588s of 349.556030273s, submitted: 24
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117653504 unmapped: 25763840 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 25591808 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 25559040 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 25559040 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 25550848 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 25550848 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 25550848 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 25550848 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 25542656 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 25542656 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117882880 unmapped: 25534464 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117882880 unmapped: 25534464 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117882880 unmapped: 25534464 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117882880 unmapped: 25534464 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117882880 unmapped: 25534464 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 25526272 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 25526272 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 25526272 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 25526272 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 25526272 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 25526272 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 25518080 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 25509888 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 25509888 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 25509888 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 25509888 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 25509888 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 25509888 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 25509888 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 25509888 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 25501696 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 25501696 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 25493504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 25493504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 25493504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 25493504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 25493504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 25493504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 25493504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 25493504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 25493504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 25493504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 25493504 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 25485312 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 25485312 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 25485312 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 25485312 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 25485312 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 25485312 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 25485312 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 25485312 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 25477120 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 25477120 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 25477120 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 25477120 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 25477120 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 25477120 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 25477120 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 25477120 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 25477120 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25468928 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25468928 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25468928 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25468928 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25468928 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25468928 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25468928 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25468928 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 25460736 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 25452544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 25452544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 25452544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 25452544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 25452544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 25452544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 25452544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 25452544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 25452544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 25452544 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 25444352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 25444352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 25444352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 25444352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 25444352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 25444352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 25444352 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 25436160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 25436160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 25436160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 25436160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 25436160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 25436160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 25436160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 25436160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 25436160 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202473 data_alloc: 218103808 data_used: 7122944
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
[duplicates elided: within this same second the tune_memory line repeats 61 times, the heartbeat line 24 times, and the commit_cache_size pair and _resize_shards line 13 times each, all byte-identical]
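The tune_memory lines come from Ceph's priority-cache autotuner comparing the allocator's mapped/unmapped heap against osd_memory_target (4 GiB here); since "old mem" equals "new mem", it is leaving the aggregate cache size alone. A minimal Python sketch that parses one of these lines; the field layout is read straight off the log above, nothing Ceph-specific is imported:

    #!/usr/bin/env python3
    # Parse one "prioritycache tune_memory" line and report heap vs. target.
    import re

    LINE = ("prioritycache tune_memory target: 4294967296 mapped: 117989376 "
            "unmapped: 25427968 heap: 143417344 old mem: 2845415832 new mem: 2845415832")

    PAT = re.compile(r"target: (\d+) mapped: (\d+) unmapped: (\d+) heap: (\d+) "
                     r"old mem: (\d+) new mem: (\d+)")

    target, mapped, unmapped, heap, old_mem, new_mem = map(int, PAT.search(LINE).groups())

    print(f"target {target / 2**30:.2f} GiB")   # osd_memory_target (4 GiB here)
    print(f"mapped {mapped / 2**20:.1f} MiB")   # pages the allocator has mapped
    print(f"heap   {heap / 2**20:.1f} MiB")     # mapped + unmapped
    assert heap == mapped + unmapped            # holds for the lines above
    # old mem == new mem means the tuner left the cache budget unchanged.
    print("cache size unchanged" if old_mem == new_mem else "cache size retuned")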
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 25419776 heap: 143417344 old mem: 2845415832 new mem: 2845415832
[duplicates elided: this tune_memory line (mapped up 8 KiB, unmapped down 8 KiB) repeats 47 times, interleaved with 17 identical heartbeat lines, 9 commit_cache_size pairs, and 9 _resize_shards lines]
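Each heartbeat line embeds a store_statfs dump in hex. A sketch decoding it into human units; the field order (available / internally reserved / total, then data stored/allocated) is my reading of how Ceph's store_statfs_t prints itself and should be checked against the running version:

    # Decode the hex fields from the heartbeat's store_statfs(...) dump.
    # Field-order assumption noted in the paragraph above: verify per version.
    fields = {
        "available":      0x4f9fa9000,
        "reserved":       0x0,
        "total":          0x4ffc00000,
        "data_stored":    0x11f5d07,
        "data_allocated": 0x12b3000,
        "omap":           0x63b,
        "meta":           0x499f9c5,
    }
    GiB = 2**30
    for name, v in fields.items():
        print(f"{name:14} {v:>14,d} bytes ({v / GiB:.3f} GiB)")
    # total ~20.0 GiB with ~19.9 GiB available: this OSD is nearly empty,
    # matching the small data_stored (~18 MiB).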
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 25411584 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c12800 session 0x55c4e7946d20
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e554f800 session 0x55c4e76243c0
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c21400 session 0x55c4e7ca3860
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 ms_handle_reset con 0x55c4e7c2bc00 session 0x55c4e7f2a1e0
[duplicates elided: the tune_memory line (mapped again up 8 KiB) repeats 27 times around these resets, with 11 identical heartbeat lines, 5 commit_cache_size pairs, and 5 _resize_shards lines]
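The ms_handle_reset entries record messenger sessions being reset by their peers; four distinct con/session pointers inside one second is consistent with short-lived CLI or mgr clients disconnecting rather than a peer OSD problem. A throwaway tally script for a saved journal extract (the input path is hypothetical):

    # Tally "ms_handle_reset" events per connection pointer from a log file.
    import re
    from collections import Counter

    resets = Counter()
    with open("/var/log/messages") as fh:   # hypothetical input file
        for line in fh:
            m = re.search(r"ms_handle_reset con (0x[0-9a-f]+) session (0x[0-9a-f]+)", line)
            if m:
                resets[m.group(1)] += 1

    for con, n in resets.most_common():
        print(f"{con}: {n} reset(s)")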
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 25403392 heap: 143417344 old mem: 2845415832 new mem: 2845415832
[duplicates elided: this tune_memory line repeats 17 times, with 6 identical heartbeat lines, 4 commit_cache_size pairs, and 4 _resize_shards lines]
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118030336 unmapped: 25387008 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'config diff' '{prefix=config diff}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'config show' '{prefix=config show}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'counter dump' '{prefix=counter dump}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'counter schema' '{prefix=counter schema}'
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 25485312 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: osd.0 156 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0x11f5d07/0x12b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: prioritycache tune_memory target: 4294967296 mapped: 118259712 unmapped: 25157632 heap: 143417344 old mem: 2845415832 new mem: 2845415832
Dec  7 05:28:48 np0005549474 ceph-osd[83033]: do_command 'log dump' '{prefix=log dump}'
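The do_command entries show the OSD's admin socket servicing config and counter queries, the same commands exposed through the ceph daemon CLI. A sketch issuing them from Python; the daemon name osd.0 is taken from the log, and it assumes local access to the daemon's asok and that each command returns JSON, which holds for these four:

    # Issue the admin-socket commands seen above via the `ceph daemon` CLI.
    import json
    import subprocess

    def daemon_cmd(daemon, *cmd):
        out = subprocess.check_output(["ceph", "daemon", daemon, *cmd])
        return json.loads(out)

    cfg = daemon_cmd("osd.0", "config", "show")       # full runtime configuration
    diff = daemon_cmd("osd.0", "config", "diff")      # only non-default settings
    schema = daemon_cmd("osd.0", "counter", "schema") # perf counter layout
    print(f"{len(cfg)} config keys, {len(diff)} top-level diff entries, "
          f"{len(schema)} counter groups")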
Dec  7 05:28:48 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
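The monitor's own cache autotuner line can be sanity-checked by arithmetic: the three allocations are exact MiB multiples (328, 332, and 304 MiB) and account for all but about 9 MB of the advertised cache_size, presumably left over as rounding and overhead:

    # Do the monitor's three cache allocations add up to cache_size?
    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104
    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size - total)   # 1010827264, ~9.2 MB unassigned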
Dec  7 05:28:48 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:48 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:48 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:48.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
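The three radosgw lines are a single request: start marker, completion marker, and the beast frontend access line. An anonymous HEAD / answered in effectively zero time looks like a load-balancer health probe rather than S3 traffic. A sketch that pulls the access line apart; the regex is fitted to this exact format, not to every radosgw build:

    # Extract client, verb, status and latency from a beast access line.
    import re

    line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:10:28:48.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')

    m = re.search(r'(\d+\.\d+\.\d+\.\d+) - (\S+) \[([^\]]+)\] "(\S+) (\S+) ([^"]+)" '
                  r'(\d+) (\d+).*latency=([\d.]+)s', line)
    ip, user, ts, verb, path, proto, status, size, latency = m.groups()
    print(ip, user, verb, path, status, f"{float(latency) * 1000:.3f} ms")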
Dec  7 05:28:48 np0005549474 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 05:28:48 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27596 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:48 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27730 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:48 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18777 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:48 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:48.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
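Alertmanager failed to POST to the ceph-dashboard webhook on compute-1 and compute-2 within its deadline ("context deadline exceeded" is a timeout, not a refusal). A cheap first diagnostic is a plain TCP connect to the endpoints named in the error; this says nothing about TLS or the dashboard API itself, only reachability:

    # TCP reachability check for the webhook endpoints that timed out.
    import socket

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(f"{host}:8443 reachable")
        except OSError as exc:
            print(f"{host}:8443 unreachable: {exc}")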
Dec  7 05:28:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:48 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:48 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:48 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:48 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:49 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27614 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  7 05:28:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329416046' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  7 05:28:49 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18798 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:49 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27742 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:49 np0005549474 nova_compute[256753]: 2025-12-07 10:28:49.369 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:28:49 np0005549474 nova_compute[256753]: 2025-12-07 10:28:49.371 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:49 np0005549474 nova_compute[256753]: 2025-12-07 10:28:49.371 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:28:49 np0005549474 nova_compute[256753]: 2025-12-07 10:28:49.372 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:28:49 np0005549474 nova_compute[256753]: 2025-12-07 10:28:49.372 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:28:49 np0005549474 nova_compute[256753]: 2025-12-07 10:28:49.372 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
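
The ovsdbapp lines above show the OVS reconnect state machine: after roughly 5 s of idleness on tcp:127.0.0.1:6640 it sends an inactivity probe, cycling IDLE to ACTIVE when the reply arrives. The probe is a JSON-RPC echo at the OVSDB layer (RFC 7047); a minimal sketch of the same round-trip against the local switch:

    # Send the JSON-RPC "echo" that the OVS reconnect FSM uses as its
    # inactivity probe against the local ovsdb-server.
    import json
    import socket

    with socket.create_connection(("127.0.0.1", 6640), timeout=5) as sock:
        sock.sendall(json.dumps({"method": "echo", "params": [], "id": "probe"}).encode())
        reply = sock.recv(4096)  # expect {"id": "probe", "result": [], "error": null}
        print(json.loads(reply.decode()))
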
Dec  7 05:28:49 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18816 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:49 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:49 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:49 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:49.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:49 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec  7 05:28:49 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/693975908' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
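
Each handle_command/dispatch pair above is a monitor command arriving from a client; the same mon stat call can be reproduced from Python through librados. A sketch, assuming the usual client.admin config and keyring locations:

    # Issue the same "mon stat" seen in the dispatch/audit pair above,
    # via the python-rados binding. Config and keyring paths are ASSUMED
    # defaults for a client.admin caller.
    import json
    import rados

    cluster = rados.Rados(
        conffile="/etc/ceph/ceph.conf",
        conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"},
    )
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(json.dumps({"prefix": "mon stat"}), b"")
    print(ret, outbuf.decode() or outs)
    cluster.shutdown()
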
Dec  7 05:28:49 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18819 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:49 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27757 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:49 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1405: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec  7 05:28:49 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27653 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:49 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18837 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:49 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-0-dotugk[74807]: ::ffff:192.168.122.100 - - [07/Dec/2025:10:28:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  7 05:28:49 np0005549474 ceph-mgr[74811]: [prometheus INFO cherrypy.access.139829033814144] ::ffff:192.168.122.100 - - [07/Dec/2025:10:28:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec  7 05:28:50 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27769 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:50 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27674 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:50 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18852 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:50 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:50 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:50 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:50.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec  7 05:28:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718751001' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  7 05:28:50 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27781 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:50 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27698 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:50 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18876 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:50 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27793 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:50 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec  7 05:28:50 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541176876' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  7 05:28:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27710 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.18894 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec  7 05:28:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/746512737' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  7 05:28:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27805 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27725 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec  7 05:28:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3211362347' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  7 05:28:51 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:51 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:51 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:51.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec  7 05:28:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445026552' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  7 05:28:51 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1406: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec  7 05:28:51 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27817 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:51 np0005549474 nova_compute[256753]: 2025-12-07 10:28:51.762 256757 DEBUG oslo_service.periodic_task [None req-cd7fceac-a17d-45b0-9696-3fc3c7bce54e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  7 05:28:51 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec  7 05:28:51 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/656323754' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  7 05:28:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec  7 05:28:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3128542265' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  7 05:28:52 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27832 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec  7 05:28:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/109711398' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec  7 05:28:52 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:52 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:52 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:52.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec  7 05:28:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1311189908' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  7 05:28:52 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27844 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec  7 05:28:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026681969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec  7 05:28:52 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec  7 05:28:52 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1961768926' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  7 05:28:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:52 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:52 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:52 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:53 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:53 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  7 05:28:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1472621041' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  7 05:28:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec  7 05:28:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876030530' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec  7 05:28:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:28:53 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:53 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:53 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:53.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec  7 05:28:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430701624' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec  7 05:28:53 np0005549474 systemd[1]: Starting Hostname Service...
Dec  7 05:28:53 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1407: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec  7 05:28:53 np0005549474 systemd[1]: Started Hostname Service.
Dec  7 05:28:53 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec  7 05:28:53 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506392686' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec  7 05:28:53 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.19044 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:54 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec  7 05:28:54 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3849601122' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec  7 05:28:54 np0005549474 nova_compute[256753]: 2025-12-07 10:28:54.373 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:28:54 np0005549474 nova_compute[256753]: 2025-12-07 10:28:54.374 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:54 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.19056 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:54 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:54 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 05:28:54 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:54.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 05:28:54 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27899 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:54 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.19065 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27923 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27929 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27935 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:55 np0005549474 podman[299637]: 2025-12-07 10:28:55.278021558 +0000 UTC m=+0.091184507 container health_status 76c1a3cc61aef0f80e42aed752eb3ccd3e18c2beab094f59691c39e4993c1882 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  7 05:28:55 np0005549474 podman[299648]: 2025-12-07 10:28:55.283957049 +0000 UTC m=+0.097832318 container health_status ac9af05163754251ae38f77ca1f4079bf2d5b8a9891ac37012121a4dee71979d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  7 05:28:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec  7 05:28:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3143688639' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec  7 05:28:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.19101 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:55 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:55 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  7 05:28:55 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  7 05:28:55 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1408: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec  7 05:28:55 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec  7 05:28:55 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2405355561' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec  7 05:28:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.19119 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.19131 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:55 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27964 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1409: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27973 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27983 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.19146 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3506297419' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27985 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:56 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:56 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:56 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:56.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27991 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.27998 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.19167 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:56 np0005549474 podman[300095]: 2025-12-07 10:28:56.675338575 +0000 UTC m=+0.047787894 container create a29b0f3164becd1ff5e03dc8fe39403eeda3eaa01204c3aa762d579cbd48b613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1425960593' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec  7 05:28:56 np0005549474 systemd[1]: Started libpod-conmon-a29b0f3164becd1ff5e03dc8fe39403eeda3eaa01204c3aa762d579cbd48b613.scope.
Dec  7 05:28:56 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:28:56 np0005549474 podman[300095]: 2025-12-07 10:28:56.649420609 +0000 UTC m=+0.021869948 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:28:56 np0005549474 podman[300095]: 2025-12-07 10:28:56.750940576 +0000 UTC m=+0.123389905 container init a29b0f3164becd1ff5e03dc8fe39403eeda3eaa01204c3aa762d579cbd48b613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  7 05:28:56 np0005549474 podman[300095]: 2025-12-07 10:28:56.759100299 +0000 UTC m=+0.131549628 container start a29b0f3164becd1ff5e03dc8fe39403eeda3eaa01204c3aa762d579cbd48b613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:28:56 np0005549474 podman[300095]: 2025-12-07 10:28:56.761607537 +0000 UTC m=+0.134056876 container attach a29b0f3164becd1ff5e03dc8fe39403eeda3eaa01204c3aa762d579cbd48b613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 05:28:56 np0005549474 vibrant_edison[300115]: 167 167
Dec  7 05:28:56 np0005549474 systemd[1]: libpod-a29b0f3164becd1ff5e03dc8fe39403eeda3eaa01204c3aa762d579cbd48b613.scope: Deactivated successfully.
Dec  7 05:28:56 np0005549474 podman[300095]: 2025-12-07 10:28:56.765807301 +0000 UTC m=+0.138256630 container died a29b0f3164becd1ff5e03dc8fe39403eeda3eaa01204c3aa762d579cbd48b613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 05:28:56 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d3f4d77591cdae480dcee4ab960084e2d192e559f089a5f07fe1f3a1f0039f97-merged.mount: Deactivated successfully.
Dec  7 05:28:56 np0005549474 podman[300095]: 2025-12-07 10:28:56.808951188 +0000 UTC m=+0.181400507 container remove a29b0f3164becd1ff5e03dc8fe39403eeda3eaa01204c3aa762d579cbd48b613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 05:28:56 np0005549474 systemd[1]: libpod-conmon-a29b0f3164becd1ff5e03dc8fe39403eeda3eaa01204c3aa762d579cbd48b613.scope: Deactivated successfully.
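
The create, init, start, attach, died, remove run above is a short-lived cephadm helper container (it printed "167 167", the ceph uid and gid, and exited). The same lifecycle can be replayed from podman's event log; a sketch, with the time window and JSON field names as assumptions:

    # Replay the create/init/start/attach/died/remove sequence above from
    # podman's event log as JSON. The one-minute window and the field names
    # ("Time", "Status", "Name") are ASSUMPTIONS that may vary across versions.
    import json
    import subprocess

    proc = subprocess.run(
        ["podman", "events", "--since", "1m", "--stream=false", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    for line in proc.stdout.splitlines():
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))
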
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  7 05:28:56 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  7 05:28:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.28006 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  7 05:28:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  7 05:28:57 np0005549474 podman[300183]: 2025-12-07 10:28:56.95498407 +0000 UTC m=+0.022912896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:28:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.28019 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:57 np0005549474 podman[300183]: 2025-12-07 10:28:57.063465407 +0000 UTC m=+0.131394213 container create 58a499680567dc0c54ac60c198d9800a3c87eed50889616790653ce7beef82ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:28:57 np0005549474 systemd[1]: Started libpod-conmon-58a499680567dc0c54ac60c198d9800a3c87eed50889616790653ce7beef82ac.scope.
Dec  7 05:28:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.19185 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:57 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:28:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a41d28243431a0a767c091489ccb2cdd157c40b4c1383349bfcbcf18a57cdc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:28:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a41d28243431a0a767c091489ccb2cdd157c40b4c1383349bfcbcf18a57cdc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:28:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a41d28243431a0a767c091489ccb2cdd157c40b4c1383349bfcbcf18a57cdc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:28:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a41d28243431a0a767c091489ccb2cdd157c40b4c1383349bfcbcf18a57cdc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:28:57 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a41d28243431a0a767c091489ccb2cdd157c40b4c1383349bfcbcf18a57cdc1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
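
The kernel warnings above concern XFS inodes without bigtime timestamps: 0x7fffffff seconds after the Unix epoch is the 2038 cutoff the message points at, as a one-liner confirms:

    # The 0x7fffffff in the xfs remount warnings is the 32-bit Unix timestamp limit.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
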
Dec  7 05:28:57 np0005549474 podman[300183]: 2025-12-07 10:28:57.16443756 +0000 UTC m=+0.232366396 container init 58a499680567dc0c54ac60c198d9800a3c87eed50889616790653ce7beef82ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 05:28:57 np0005549474 podman[300183]: 2025-12-07 10:28:57.174520135 +0000 UTC m=+0.242448941 container start 58a499680567dc0c54ac60c198d9800a3c87eed50889616790653ce7beef82ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 05:28:57 np0005549474 podman[300183]: 2025-12-07 10:28:57.177440894 +0000 UTC m=+0.245369700 container attach 58a499680567dc0c54ac60c198d9800a3c87eed50889616790653ce7beef82ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:28:57 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:57.268Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  7 05:28:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.28021 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.28043 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:57 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  7 05:28:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  7 05:28:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  7 05:28:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
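
The from='admin socket' entries are local queries over the monitor's Unix admin socket rather than network clients. The same call can be made by hand; a sketch, assuming a context where the mon.compute-0 socket is visible:

    # Run the mon_status query that the admin-socket audit lines record.
    # ASSUMPTION: executed where the monitor's admin socket is reachable,
    # e.g. inside `cephadm shell --name mon.compute-0` on this host.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "daemon", "mon.compute-0", "mon_status"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out)["state"])  # "leader", matching mon.compute-0@0(leader)
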
Dec  7 05:28:57 np0005549474 admiring_solomon[300231]: --> passed data devices: 0 physical, 1 LVM
Dec  7 05:28:57 np0005549474 admiring_solomon[300231]: --> All data devices are unavailable
Dec  7 05:28:57 np0005549474 systemd[1]: libpod-58a499680567dc0c54ac60c198d9800a3c87eed50889616790653ce7beef82ac.scope: Deactivated successfully.
Dec  7 05:28:57 np0005549474 podman[300183]: 2025-12-07 10:28:57.537001378 +0000 UTC m=+0.604930194 container died 58a499680567dc0c54ac60c198d9800a3c87eed50889616790653ce7beef82ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  7 05:28:57 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:57 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:57 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:57.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 05:28:57 np0005549474 systemd[1]: var-lib-containers-storage-overlay-8a41d28243431a0a767c091489ccb2cdd157c40b4c1383349bfcbcf18a57cdc1-merged.mount: Deactivated successfully.
Dec  7 05:28:57 np0005549474 podman[300183]: 2025-12-07 10:28:57.591317369 +0000 UTC m=+0.659246165 container remove 58a499680567dc0c54ac60c198d9800a3c87eed50889616790653ce7beef82ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_solomon, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec  7 05:28:57 np0005549474 systemd[1]: libpod-conmon-58a499680567dc0c54ac60c198d9800a3c87eed50889616790653ce7beef82ac.scope: Deactivated successfully.
Dec  7 05:28:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  7 05:28:57 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  7 05:28:57 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.28051 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:57 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 05:28:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:58 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 05:28:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:58 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 05:28:58 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-2-0-compute-0-bjrqrk[272881]: 07/12/2025 10:28:58 : epoch 69355377 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  7 05:28:58 np0005549474 ceph-mgr[74811]: log_channel(cluster) log [DBG] : pgmap v1410: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec  7 05:28:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Dec  7 05:28:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1102083252' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec  7 05:28:58 np0005549474 podman[300503]: 2025-12-07 10:28:58.187619147 +0000 UTC m=+0.042795928 container create 83e201c83971db2e97078dc77843df3f37a9945a3081924d55f4cd072b3140bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_burnell, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 05:28:58 np0005549474 systemd[1]: Started libpod-conmon-83e201c83971db2e97078dc77843df3f37a9945a3081924d55f4cd072b3140bb.scope.
Dec  7 05:28:58 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:28:58 np0005549474 podman[300503]: 2025-12-07 10:28:58.171284721 +0000 UTC m=+0.026461522 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:28:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.28078 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:58 np0005549474 podman[300503]: 2025-12-07 10:28:58.351521545 +0000 UTC m=+0.206698346 container init 83e201c83971db2e97078dc77843df3f37a9945a3081924d55f4cd072b3140bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 05:28:58 np0005549474 podman[300503]: 2025-12-07 10:28:58.366961526 +0000 UTC m=+0.222138307 container start 83e201c83971db2e97078dc77843df3f37a9945a3081924d55f4cd072b3140bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Dec  7 05:28:58 np0005549474 affectionate_burnell[300523]: 167 167
Dec  7 05:28:58 np0005549474 systemd[1]: libpod-83e201c83971db2e97078dc77843df3f37a9945a3081924d55f4cd072b3140bb.scope: Deactivated successfully.
Dec  7 05:28:58 np0005549474 podman[300503]: 2025-12-07 10:28:58.432640547 +0000 UTC m=+0.287817348 container attach 83e201c83971db2e97078dc77843df3f37a9945a3081924d55f4cd072b3140bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_burnell, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 05:28:58 np0005549474 podman[300503]: 2025-12-07 10:28:58.433915462 +0000 UTC m=+0.289092253 container died 83e201c83971db2e97078dc77843df3f37a9945a3081924d55f4cd072b3140bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 05:28:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  7 05:28:58 np0005549474 radosgw[96353]: ====== starting new request req=0x7f6df0a9c5d0 =====
Dec  7 05:28:58 np0005549474 radosgw[96353]: ====== req done req=0x7f6df0a9c5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 05:28:58 np0005549474 radosgw[96353]: beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:58.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
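
The three radosgw lines above trace a single request end to end: a start marker, a completion line carrying op status and http_status, and the beast frontend access record. An anonymous "HEAD / HTTP/1.0" answered 200 with zero latency is the signature of a load-balancer health probe rather than real S3 traffic. A minimal sketch for pulling fields out of such beast lines, assuming Python 3 and inferring the field layout from this one sample:

    import re

    # Field layout inferred from the single beast access line above; treat
    # the pattern as an assumption, not a documented radosgw format.
    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<size>\d+)'
    )

    line = ('beast: 0x7f6df0a9c5d0: 192.168.122.100 - anonymous '
            '[07/Dec/2025:10:28:58.445 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')

    m = BEAST_RE.search(line)
    if m:
        print(m.group('client'), m.group('request'), m.group('status'))
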
Dec  7 05:28:58 np0005549474 systemd[1]: var-lib-containers-storage-overlay-d2e416c01cc9869ada66b3bf732815be61904fb87a0f8784235fd77de7a6eda1-merged.mount: Deactivated successfully.
Dec  7 05:28:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.19287 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:58 np0005549474 podman[300503]: 2025-12-07 10:28:58.478272301 +0000 UTC m=+0.333449082 container remove 83e201c83971db2e97078dc77843df3f37a9945a3081924d55f4cd072b3140bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 05:28:58 np0005549474 systemd[1]: libpod-conmon-83e201c83971db2e97078dc77843df3f37a9945a3081924d55f4cd072b3140bb.scope: Deactivated successfully.
Dec  7 05:28:58 np0005549474 podman[300578]: 2025-12-07 10:28:58.632887437 +0000 UTC m=+0.034743739 container create daba42a888fe52f3b29080f96b97d9f006d9e0f46bf3053c885f0361e7c86fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_gagarin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 05:28:58 np0005549474 systemd[1]: Started libpod-conmon-daba42a888fe52f3b29080f96b97d9f006d9e0f46bf3053c885f0361e7c86fd1.scope.
Dec  7 05:28:58 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.28090 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  7 05:28:58 np0005549474 systemd[1]: Started libcrun container.
Dec  7 05:28:58 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e4e845235a76999f85a3d9e617f21d38ca324e2ca66d8f1b8a9ba8cce8db56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 05:28:58 np0005549474 podman[300578]: 2025-12-07 10:28:58.618108173 +0000 UTC m=+0.019964495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 05:28:58 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e4e845235a76999f85a3d9e617f21d38ca324e2ca66d8f1b8a9ba8cce8db56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 05:28:58 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e4e845235a76999f85a3d9e617f21d38ca324e2ca66d8f1b8a9ba8cce8db56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 05:28:58 np0005549474 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e4e845235a76999f85a3d9e617f21d38ca324e2ca66d8f1b8a9ba8cce8db56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 05:28:58 np0005549474 podman[300578]: 2025-12-07 10:28:58.74779899 +0000 UTC m=+0.149655302 container init daba42a888fe52f3b29080f96b97d9f006d9e0f46bf3053c885f0361e7c86fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 05:28:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  7 05:28:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  7 05:28:58 np0005549474 podman[300578]: 2025-12-07 10:28:58.754184224 +0000 UTC m=+0.156040526 container start daba42a888fe52f3b29080f96b97d9f006d9e0f46bf3053c885f0361e7c86fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_gagarin, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 05:28:58 np0005549474 podman[300578]: 2025-12-07 10:28:58.756864447 +0000 UTC m=+0.158720769 container attach daba42a888fe52f3b29080f96b97d9f006d9e0f46bf3053c885f0361e7c86fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:28:58 np0005549474 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec  7 05:28:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712757737' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec  7 05:28:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  7 05:28:58 np0005549474 ceph-mon[74516]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  7 05:28:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:58.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  7 05:28:59 np0005549474 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-alertmanager-compute-0[105466]: ts=2025-12-07T10:28:59.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
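
The two alertmanager lines above show the ceph-dashboard webhook receiver failing on both configured endpoints: webhook[1] (compute-1) times out at the HTTP layer (context deadline exceeded) while webhook[2] (compute-2) never completes the TCP connect (dial i/o timeout), so the notification for the alert group is dropped once the retry budget is spent. A quick reachability check against the same endpoints, sketched with the Python 3 standard library (hostnames and port taken from the log):

    import socket

    # Receiver endpoints taken from the two alertmanager lines above.
    receivers = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]

    for host, port in receivers:
        try:
            # A bare TCP connect separates "dial tcp ... i/o timeout"
            # (no connection at all) from an HTTP-level failure.
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")
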
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]: {
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:    "0": [
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:        {
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            "devices": [
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "/dev/loop3"
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            ],
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            "lv_name": "ceph_lv0",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            "lv_size": "21470642176",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=32dc95f1-8dbf-4ad2-8ecd-93489439352c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            "lv_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            "name": "ceph_lv0",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            "tags": {
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.block_uuid": "pUKYxQ-Rr6G-rvc6-DPjm-Qwvg-zgQb-UmKkse",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.cephx_lockbox_secret": "",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.cluster_name": "ceph",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.crush_device_class": "",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.encrypted": "0",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.osd_id": "0",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.type": "block",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.vdo": "0",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:                "ceph.with_tpm": "0"
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            },
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            "type": "block",
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:            "vg_name": "ceph_vg0"
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:        }
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]:    ]
Dec  7 05:28:59 np0005549474 lucid_gagarin[300613]: }
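
The JSON block above is the stdout of the short-lived lucid_gagarin container, keyed by OSD id and describing the logical volume backing osd.0, with the ceph.* LVM tags shown both as the raw lv_tags string and as the expanded tags map; the shape matches what ceph-volume lvm list --format json produces when cephadm refreshes its device inventory. A minimal sketch that consumes a trimmed copy of this output, assuming Python 3 (only a subset of the fields is reproduced):

    import json

    # Trimmed copy of the lucid_gagarin output above (journald prefix
    # stripped, most tags omitted for brevity).
    raw = """
    {
        "0": [
            {
                "lv_path": "/dev/ceph_vg0/ceph_lv0",
                "lv_size": "21470642176",
                "tags": {
                    "ceph.osd_id": "0",
                    "ceph.osd_fsid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c",
                    "ceph.type": "block"
                }
            }
        ]
    }
    """

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['tags']['ceph.type']} "
                  f"on {lv['lv_path']} ({gib:.1f} GiB)")
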
Dec  7 05:28:59 np0005549474 systemd[1]: libpod-daba42a888fe52f3b29080f96b97d9f006d9e0f46bf3053c885f0361e7c86fd1.scope: Deactivated successfully.
Dec  7 05:28:59 np0005549474 podman[300578]: 2025-12-07 10:28:59.06983203 +0000 UTC m=+0.471688352 container died daba42a888fe52f3b29080f96b97d9f006d9e0f46bf3053c885f0361e7c86fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 05:28:59 np0005549474 ceph-mgr[74811]: log_channel(audit) log [DBG] : from='client.28136 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 05:28:59 np0005549474 nova_compute[256753]: 2025-12-07 10:28:59.375 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  7 05:28:59 np0005549474 nova_compute[256753]: 2025-12-07 10:28:59.376 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  7 05:28:59 np0005549474 nova_compute[256753]: 2025-12-07 10:28:59.376 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec  7 05:28:59 np0005549474 nova_compute[256753]: 2025-12-07 10:28:59.376 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:28:59 np0005549474 nova_compute[256753]: 2025-12-07 10:28:59.376 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  7 05:28:59 np0005549474 nova_compute[256753]: 2025-12-07 10:28:59.377 256757 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
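
The nova_compute burst above is ovsdbapp's keepalive cycle against the local OVSDB at tcp:127.0.0.1:6640: the poller wakes after the ~5 s probe interval, the connection is found idle for 5001 ms, an inactivity probe is sent, and the reconnect state machine steps through IDLE and back to ACTIVE once the probe is answered. A toy rendering of that idle-probe logic, assuming Python 3 (the real implementation is ovs/reconnect.py, referenced in the log paths):

    import time

    # Toy version of the idle-probe cycle visible above; the real state
    # machine lives in ovs/reconnect.py (see the paths in the log lines).
    PROBE_INTERVAL = 5.0  # seconds, matching the ~5001 ms idle in the log

    class Session:
        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def tick(self, send_probe):
            idle = time.monotonic() - self.last_activity
            if self.state == "ACTIVE" and idle >= PROBE_INTERVAL:
                send_probe()          # "sending inactivity probe"
                self.state = "IDLE"   # "entering IDLE"

        def on_data(self):
            # Any inbound data (e.g. the probe reply) reactivates.
            self.last_activity = time.monotonic()
            self.state = "ACTIVE"     # "entering ACTIVE"

    s = Session()
    s.last_activity -= PROBE_INTERVAL   # pretend 5 s of silence
    s.tick(lambda: print("probe sent"))
    print(s.state)  # IDLE
    s.on_data()
    print(s.state)  # ACTIVE
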
Dec  7 05:28:59 np0005549474 systemd[1]: var-lib-containers-storage-overlay-f4e4e845235a76999f85a3d9e617f21d38ca324e2ca66d8f1b8a9ba8cce8db56-merged.mount: Deactivated successfully.
Dec  7 05:28:59 np0005549474 podman[300578]: 2025-12-07 10:28:59.440119705 +0000 UTC m=+0.841976007 container remove daba42a888fe52f3b29080f96b97d9f006d9e0f46bf3053c885f0361e7c86fd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 05:28:59 np0005549474 systemd[1]: libpod-conmon-daba42a888fe52f3b29080f96b97d9f006d9e0f46bf3053c885f0361e7c86fd1.scope: Deactivated successfully.
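
With the conmon scope gone, the lucid_gagarin lifecycle is complete: create, init, start, attach, died and remove all happened within about a second, the pattern of a one-shot cephadm check container (affectionate_burnell at 05:28:58 follows the same arc). One way to review this churn after the fact, sketched with Python 3 around podman events (the JSON field names below are an assumption and may vary between Podman releases):

    import json
    import subprocess

    # Replay recent container events; --since, --stream and --format json
    # are standard 'podman events' options.
    proc = subprocess.run(
        ["podman", "events", "--since", "10m", "--stream=false",
         "--format", "json"],
        capture_output=True, text=True, check=True,
    )

    for raw in proc.stdout.splitlines():
        ev = json.loads(raw)
        # One-shot cephadm check containers show up as tight
        # create/start/died/remove bursts like the ones logged above.
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))
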
